PMAP(9) Kernel Developer's Manual PMAP(9)

NAME

pmap - machine-dependent portion of the virtual memory system

SYNOPSIS

#include <sys/param.h>
#include <uvm/uvm_extern.h>

void
pmap_init(void);

void
pmap_virtual_space(vaddr_t *vstartp, vaddr_t *vendp);

vaddr_t
pmap_steal_memory(vsize_t size, vaddr_t *vstartp, vaddr_t *vendp);

pmap_t
pmap_kernel(void);

pmap_t
pmap_create(void);

void
pmap_destroy(pmap_t pmap);

void
pmap_reference(pmap_t pmap);

void
pmap_fork(pmap_t src_map, pmap_t dst_map);

long
pmap_resident_count(pmap_t pmap);

long
pmap_wired_count(pmap_t pmap);

vaddr_t
pmap_growkernel(vaddr_t maxkvaddr);

int
pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, u_int flags);

void
pmap_remove(pmap_t pmap, vaddr_t sva, vaddr_t eva);

void
pmap_remove_all(pmap_t pmap);

void
pmap_protect(pmap_t pmap, vaddr_t sva, vaddr_t eva, vm_prot_t prot);

void
pmap_unwire(pmap_t pmap, vaddr_t va);

bool
pmap_extract(pmap_t pmap, vaddr_t va, paddr_t *pap);

void
pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot, u_int flags);

void
pmap_kremove(vaddr_t va, vsize_t size);

void
pmap_copy(pmap_t dst_map, pmap_t src_map, vaddr_t dst_addr, vsize_t len, vaddr_t src_addr);

void
pmap_update(pmap_t pmap);

void
pmap_activate(struct lwp *l);

void
pmap_deactivate(struct lwp *l);

void
pmap_zero_page(paddr_t pa);

void
pmap_copy_page(paddr_t src, paddr_t dst);

void
pmap_page_protect(struct vm_page *pg, vm_prot_t prot);

bool
pmap_clear_modify(struct vm_page *pg);

bool
pmap_clear_reference(struct vm_page *pg);

bool
pmap_is_modified(struct vm_page *pg);

bool
pmap_is_referenced(struct vm_page *pg);

paddr_t
pmap_phys_address(paddr_t cookie);

vaddr_t
PMAP_MAP_POOLPAGE(paddr_t pa);

paddr_t
PMAP_UNMAP_POOLPAGE(vaddr_t va);

void
PMAP_PREFER(vaddr_t hint, vaddr_t *vap, vsize_t sz, int td);

DESCRIPTION

The pmap module is the machine-dependent portion of the NetBSD virtual memory system uvm(9). The purpose of the pmap module is to manage physical address maps, to program the memory management hardware on the system, and to perform any cache operations necessary to ensure correct operation of the virtual memory system. The pmap module is also responsible for maintaining certain information required by uvm(9).

In order to cope with hardware architectures that make the invalidation of virtual address mappings expensive (e.g., TLB invalidations, TLB shootdown operations for multiple processors), the pmap module is allowed to delay mapping invalidation or protection operations until such time as they are actually necessary. The functions that are allowed to delay such actions are pmap_enter(), pmap_remove(), pmap_protect(), pmap_kenter_pa(), and pmap_kremove(). Callers of these functions must use the pmap_update() function to notify the pmap module that the mappings need to be made correct. Since the pmap module is provided with information as to which processors are using a given physical map, the pmap module may use whatever optimizations it has available to reduce the expense of virtual-to-physical mapping synchronization.
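For example, machine-independent code that establishes a batch of kernel mappings typically defers the flush to a single pmap_update() call. The following is a minimal sketch of that pattern; the variables i, npages, va, and pa stand in for the caller's own state:

        for (i = 0; i < npages; i++)
                pmap_kenter_pa(va + i * PAGE_SIZE, pa + i * PAGE_SIZE,
                    VM_PROT_READ | VM_PROT_WRITE, 0);
        /* Make all of the mappings entered above visible at once. */
        pmap_update(pmap_kernel());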

HEADER FILES AND DATA STRUCTURES

Machine-dependent code must provide the header file <machine/pmap.h>. This file contains the definition of the pmap structure:

struct pmap { 
        /* Contents defined by pmap implementation. */ 
}; 
typedef struct pmap *pmap_t;

This header file may also define other data structures that the pmap implementation uses.

Note that all prototypes for pmap interface functions are provided by the header file <uvm/uvm_pmap.h>. It is possible to override this behavior by defining the C pre-processor macro PMAP_EXCLUDE_DECLS. This may be used to add a layer of indirection to pmap API calls, for handling different MMU types in a single pmap module, for example. If the PMAP_EXCLUDE_DECLS macro is defined, <machine/pmap.h> must provide function prototypes in a block like so:

#ifdef _KERNEL /* not exposed to user namespace */ 
__BEGIN_DECLS  /* make safe for C++ */ 
/* Prototypes go here. */ 
__END_DECLS 
#endif /* _KERNEL */

The header file <uvm/uvm_pmap.h> defines a structure for tracking pmap statistics (see below). This structure is defined as:

struct pmap_statistics { 
        long        resident_count; /* number of mapped pages */ 
        long        wired_count;    /* number of wired pages */ 
};

WIRED MAPPINGS

The pmap module is based on the premise that all information contained in the physical maps it manages is redundant. That is, physical map information may be “forgotten” by the pmap module in the event that it is necessary to do so; it can be rebuilt by uvm(9) by taking a page fault. There is one exception to this rule: so-called “wired” mappings may not be forgotten. Wired mappings are those for which either no high-level information exists with which to rebuild the mapping, or mappings which are needed by critical sections of code where taking a page fault is unacceptable. Information about which mappings are wired is provided to the pmap module when a mapping is established.

MODIFIED/REFERENCED INFORMATION

The pmap module is required to keep track of whether or not a page managed by the virtual memory system has been referenced or modified. This information is used by uvm(9) to determine what happens to the page when scanned by the pagedaemon.

Many CPUs provide hardware support for tracking modified/referenced information. However, others, particularly modern RISC CPUs, do not. On CPUs which lack hardware support for modified/referenced tracking, the pmap module must emulate it in software. There are several strategies for doing this, and the best strategy depends on the CPU.

The “referenced” attribute is used by the pagedaemon to determine if a page is “active”. Active pages are not candidates for re-use in the page replacement algorithm. Accurate referenced information is not required for correct operation; if supplying referenced information for a page is not feasible, then the pmap implementation should always consider the “referenced” attribute to be false.

The “modified” attribute is used by the pagedaemon to determine if a page needs to be cleaned (written to backing store; swap space, a regular file, etc.). Accurate modified information must be provided by the pmap module for correct operation of the virtual memory system.

Note that modified/referenced information is only tracked for pages managed by the virtual memory system (i.e., pages for which a vm_page structure exists). In addition, only “managed” mappings of those pages have modified/referenced tracking. Mappings entered with the pmap_enter() function are “managed” mappings. It is possible for “unmanaged” mappings of a page to be created, using the pmap_kenter_pa() function. The use of “unmanaged” mappings should be limited to code which may execute in interrupt context (for example, the kernel memory allocator), or to enter mappings for physical addresses which are not managed by the virtual memory system. “Unmanaged” mappings may only be entered into the kernel's virtual address space. This constraint is placed on the callers of the pmap_kenter_pa() and pmap_kremove() functions so that the pmap implementation need not block interrupts when manipulating data structures or holding locks.

Also note that the modified/referenced information must be tracked on a per-page basis; they are not attributes of a mapping, but attributes of a page. Therefore, even after all mappings for a given page have been removed, the modified/referenced information for that page must be preserved. The only time the modified/referenced attributes may be cleared is when the virtual memory system explicitly calls the pmap_clear_modify() and pmap_clear_reference() functions. These functions must also change any internal state necessary to detect the page being modified or referenced again after the modified or referenced state is cleared. (Prior to NetBSD 1.6, pmap implementations could get away without this because UVM (and Mach VM before that) always called pmap_page_protect() before clearing the modified or referenced state, but UVM has been changed to not do this anymore, so all pmap implementations must now handle this.)
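As an illustration of this requirement, a pmap that tracks these attributes in software might clear the “modified” attribute along the following lines. This is a sketch only; the per-page attribute field and the PV_MODIFIED flag are hypothetical names, not part of the pmap API:

bool
pmap_clear_modify(struct vm_page *pg)
{
        bool rv = (pg->mdpage.pv_attrs & PV_MODIFIED) != 0;

        pg->mdpage.pv_attrs &= ~PV_MODIFIED;
        /*
         * Remove write permission from the remaining mappings so the
         * next write to the page faults and is recorded as a new
         * modification.
         */
        pmap_page_protect(pg, VM_PROT_READ | VM_PROT_EXECUTE);
        return rv;
}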

STATISTICS

The pmap module is required to keep statistics as to the number of “resident” pages and the number of “wired” pages.

A “resident” page is one for which a mapping exists. This statistic is used to compute the resident size of a process and enforce resource limits. Only pages (whether managed by the virtual memory system or not) which are mapped into a physical map should be counted in the resident count.

A “wired” page is one for which a wired mapping exists. This statistic is used to enforce resource limits.

Note that it is recommended (though not required) that the pmap implementation use the pmap_statistics structure in the tracking of pmap statistics by placing it inside the pmap structure and adjusting the counts when mappings are established, changed, or removed. This avoids potentially expensive data structure traversals when the statistics are queried.
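For example, a pmap implementation might embed the structure in its pmap and export the statistics queries as macros; the pm_stats member name below is illustrative only:

struct pmap {
        /* Other machine-dependent fields. */
        struct pmap_statistics pm_stats;
};

#define pmap_resident_count(pmap)       ((pmap)->pm_stats.resident_count)
#define pmap_wired_count(pmap)          ((pmap)->pm_stats.wired_count)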

REQUIRED FUNCTIONS

This section describes functions that a pmap module must provide to the virtual memory system.
void pmap_init(void)
This function initializes the pmap module. It is called by uvm_init() to initialize any data structures that the module needs to manage physical maps.
pmap_t pmap_kernel(void)
A machine-independent macro which expands to kernel_pmap_ptr. This variable must be exported by the platform's pmap module and it must point to the kernel pmap.
void pmap_virtual_space(vaddr_t *vstartp, vaddr_t *vendp)
The pmap_virtual_space() function is called to determine the initial kernel virtual address space beginning and end. These values are used to create the kernel's virtual memory map. The function must set *vstartp to the first kernel virtual address that will be managed by uvm(9), and must set *vendp to the last kernel virtual address that will be managed by uvm(9).

If the pmap_growkernel() feature is used by a pmap implementation, then *vendp should be set to the maximum kernel virtual address allowed by the implementation. If pmap_growkernel() is not used, then *vendp must be set to the maximum kernel virtual address that can be mapped with the resources currently allocated to map the kernel virtual address space.
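A typical implementation simply reports values computed during bootstrap. In the sketch below, virtual_avail and virtual_end are hypothetical names for state kept by the pmap module:

void
pmap_virtual_space(vaddr_t *vstartp, vaddr_t *vendp)
{

        *vstartp = virtual_avail;       /* first managed kernel VA */
        *vendp = virtual_end;           /* last managed kernel VA */
}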

pmap_t pmap_create(void)
Create a physical map and return it to the caller. The reference count on the new map is 1.
void pmap_destroy(pmap_t pmap)
Drop the reference count on the specified physical map. If the reference count drops to 0, all resources associated with the physical map are released and the physical map is destroyed. When the count drops to 0, no mappings will exist in the map; the pmap implementation may assert this.
void pmap_reference(pmap_t pmap)
Increment the reference count on the specified physical map.
long pmap_resident_count(pmap_t pmap)
Query the “resident pages” statistic for pmap.

Note that this function may be provided as a C pre-processor macro.

long pmap_wired_count(pmap_t pmap)
Query the “wired pages” statistic for pmap.

Note that this function may be provided as a C pre-processor macro.

int pmap_enter(pmap_t pmap, vaddr_t va, paddr_t pa, vm_prot_t prot, u_int flags)
Create a mapping in physical map pmap for the physical address pa at the virtual address va with protection specified by bits in prot:
VM_PROT_READ
The mapping must allow reading.
VM_PROT_WRITE
The mapping must allow writing.
VM_PROT_EXECUTE
The page mapped contains instructions that will be executed by the processor.

The flags argument contains protection bits (the same bits as used in the prot argument) indicating the type of access that caused the mapping to be created. This information may be used to seed modified/referenced information for the page being mapped, possibly avoiding redundant faults on platforms that track modified/referenced information in software. Other information provided by flags:

PMAP_WIRED
The mapping being created is a wired mapping.
PMAP_CANFAIL
The call to pmap_enter() is allowed to fail. If this flag is not set, and the pmap_enter() call is unable to create the mapping, perhaps due to insufficient resources, the pmap module must panic.
PMAP_NOCACHE
The mapping being created is not cached. Write accesses have a write-through policy. No speculative memory accesses.
PMAP_WRITE_COMBINE
The mapping being created is not cached. Writes are combined and done in one burst. Speculative read accesses may be allowed.
PMAP_WRITE_BACK
All accesses to the created mapping are cached. On reads, cachelines become shared or exclusive if allocated on cache miss. On writes, cachelines become modified on a cache miss.
PMAP_NOCACHE_OVR
Same as PMAP_NOCACHE but mapping is overrideable (e.g. on x86 by MTRRs).

The access type provided in the flags argument will never exceed the protection specified by prot. The pmap implementation may assert this. Note that on systems that do not provide hardware support for tracking modified/referenced information, modified/referenced information for the page must be seeded with the access type provided in flags if the PMAP_WIRED flag is set. This is to prevent a fault for the purpose of tracking modified/referenced information from occurring while the system is in a critical section where a fault would be unacceptable.

Note that pmap_enter() is sometimes called to enter a mapping at a virtual address for which a mapping already exists. In this situation, the implementation must take whatever action is necessary to invalidate the previous mapping before entering the new one.

Also note that pmap_enter() is sometimes called to change the protection for a pre-existing mapping, or to change the “wired” attribute for a pre-existing mapping.

The pmap_enter() function returns 0 on success or an error code indicating the mode of failure.
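As an illustration, a caller that can recover from a resource shortage would pass PMAP_CANFAIL and check the return value. This is a sketch; pmap, va, and pg stand in for the caller's own state, and the access type passed in flags (VM_PROT_WRITE) does not exceed prot:

        error = pmap_enter(pmap, va, VM_PAGE_TO_PHYS(pg),
            VM_PROT_READ | VM_PROT_WRITE,
            VM_PROT_WRITE | PMAP_CANFAIL);
        if (error != 0) {
                /* Insufficient resources; let the caller retry later. */
                return error;
        }
        pmap_update(pmap);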

void pmap_remove(pmap_t pmap, vaddr_t sva, vaddr_t eva)
Remove mappings from the virtual address range sva to eva from the specified physical map.
void pmap_remove_all(pmap_t pmap)
This function is a hint to the pmap implementation that all entries in pmap will be removed before any more entries are entered. Following this call, there will be pmap_remove() calls resulting in every mapping being removed, followed by either pmap_destroy() or pmap_update(). No other pmap interfaces which take pmap as an argument will be called during this process. Other interfaces which might need to access pmap (such as pmap_page_protect()) are permitted during this process.

The pmap implementation is free to either remove all the pmap's mappings immediately in pmap_remove_all(), or to use the knowledge of the upcoming pmap_remove() calls to optimize the removals (or to just ignore this call).

void pmap_protect(pmap_t pmap, vaddr_t sva, vaddr_t eva, vm_prot_t prot)
Set the protection of the mappings in the virtual address range sva to eva in the specified physical map.
void pmap_unwire(pmap_t pmap, vaddr_t va)
Clear the “wired” attribute on the mapping for virtual address va.
bool pmap_extract(pmap_t pmap, vaddr_t va, paddr_t *pap)
This function extracts a mapping from the specified physical map. It serves two purposes: to determine if a mapping exists for the specified virtual address, and to determine what physical address is mapped at the specified virtual address. pmap_extract() should return the physical address for any kernel-accessible address, including KSEG-style direct-mapped kernel addresses.

The pmap_extract() function returns false if a mapping for va does not exist. Otherwise, it returns true and places the physical address mapped at va into *pap if the pap argument is non-NULL.
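For example, a caller that needs the physical address backing a kernel virtual address might write (a minimal sketch):

        paddr_t pa;

        if (!pmap_extract(pmap_kernel(), va, &pa))
                panic("no mapping entered at va");
        /* pa now holds the physical address mapped at va. */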

void pmap_kenter_pa(vaddr_t va, paddr_t pa, vm_prot_t prot, u_int flags)
Enter an “unmanaged” mapping for physical address pa at virtual address va with protection specified by bits in prot:
VM_PROT_READ
The mapping must allow reading.
VM_PROT_WRITE
The mapping must allow writing.
VM_PROT_EXECUTE
The page mapped contains instructions that will be executed by the processor.

Information provided by flags:

PMAP_NOCACHE
The mapping being created is not cached. Write accesses have a write-through policy. No speculative memory accesses.
PMAP_WRITE_COMBINE
The mapping being created is not cached. Writes are combined and done in one burst. Speculative read accesses may be allowed.
PMAP_WRITE_BACK
All accesses to the created mapping are cached. On reads, cachelines become shared or exclusive if allocated on cache miss. On writes, cachelines become modified on a cache miss.
PMAP_NOCACHE_OVR
Same as PMAP_NOCACHE but mapping is overrideable (e.g. on x86 by MTRRs).

Mappings of this type are always “wired”, and are unaffected by routines that alter the protection of pages (such as pmap_page_protect()). Such mappings are also not included in the gathering of modified/referenced information about a page. Mappings entered with pmap_kenter_pa() by machine-independent code must not have execute permission, as the data structures required to track execute permission of a page may not be available to pmap_kenter_pa(). Machine-independent code is not allowed to enter a mapping with pmap_kenter_pa() at a virtual address for which a valid mapping already exists. Mappings created with pmap_kenter_pa() may be removed only with a call to pmap_kremove().

Note that pmap_kenter_pa() must be safe for use in interrupt context. splvm() blocks interrupts that might cause pmap_kenter_pa() to be called.

void pmap_kremove(vaddr_t va, vsize_t size)
Remove all mappings starting at virtual address va for size bytes from the kernel physical map. All mappings that are removed must be the “unmanaged” type created with pmap_kenter_pa(). The implementation may assert this.
void pmap_copy(pmap_t dst_map, pmap_t src_map, vaddr_t dst_addr, vsize_t len, vaddr_t src_addr)
This function copies the mappings starting at src_addr in src_map for len bytes into dst_map starting at dst_addr.

Note that while this function is required to be provided by a pmap implementation, it is not actually required to do anything. pmap_copy() is merely advisory (it is used in the fork(2) path to “pre-fault” the child's address space).

void pmap_update(pmap_t pmap)
This function is used to inform the pmap module that all physical mappings, for the specified pmap, must now be correct. That is, all delayed virtual-to-physical mapping updates (such as TLB invalidation or address space identifier updates) must be completed. This routine must be used after calls to pmap_enter(), pmap_remove(), pmap_protect(), pmap_kenter_pa(), and pmap_kremove() in order to ensure correct operation of the virtual memory system.

If a pmap implementation does not delay virtual-to-physical mapping updates, pmap_update() performs no operation. In this case, the call may be eliminated by defining it as a C pre-processor macro in <machine/pmap.h>.
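For example, an implementation that performs all updates immediately might define (a common idiom rather than a requirement):

#define pmap_update(pmap)       /* nothing */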

void pmap_activate(struct lwp *l)
Activate the physical map used by the process behind lwp l. This is called by the virtual memory system when the virtual memory context for a process is changed, and is also often used by machine-dependent context switch code to program the memory management hardware with the process's page table base, etc. Note that pmap_activate() may not always be called when l is the current lwp. pmap_activate() must be able to handle this scenario.
void pmap_deactivate(struct lwp *l)
Deactivate the physical map used by the process behind lwp l. It is generally used in conjunction with pmap_activate(). Like pmap_activate(), pmap_deactivate() may not always be called when l is the current lwp.
void pmap_zero_page(paddr_t pa)
Zero the PAGE_SIZE sized region starting at physical address pa. The pmap implementation must take whatever steps are necessary to map the page to a kernel-accessible address and zero the page. It is suggested that implementations use an optimized zeroing algorithm, as the performance of this function directly impacts page fault performance. The implementation may assume that the region is PAGE_SIZE aligned and exactly PAGE_SIZE bytes in length.

Note that the cache configuration of the platform should also be considered in the implementation of pmap_zero_page(). For example, on systems with a physically-addressed cache, the cache load caused by zeroing the page will not be wasted, as the zeroing is usually done on-demand. However, on systems with a virtually-addressed cache, the cache load caused by zeroing the page will be wasted, as the page will be mapped at a virtual address which is different from that used to zero the page. In the virtually-addressed cache case, care should also be taken to avoid cache alias problems.
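For instance, a port with a direct-mapped physical segment can avoid creating a temporary mapping entirely. The following sketch uses the MIPS KSEG0 segment as an example; a real implementation would also address the cache concerns noted above:

void
pmap_zero_page(paddr_t pa)
{

        /* KSEG0 directly maps physical memory, so just zero it. */
        memset((void *)MIPS_PHYS_TO_KSEG0(pa), 0, PAGE_SIZE);
}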

void pmap_copy_page(paddr_t src, paddr_t dst)
Copy the PAGE_SIZE sized region starting at physical address src to the same sized region starting at physical address dst. The pmap implementation must take whatever steps are necessary to map the source and destination pages to a kernel-accessible address and perform the copy. It is suggested that implementations use an optimized copy algorithm, as the performance of this function directly impacts page fault performance. The implementation may assume that both regions are PAGE_SIZE aligned and exactly PAGE_SIZE bytes in length.

The same cache considerations that apply to pmap_zero_page() apply to pmap_copy_page().

void pmap_page_protect(struct vm_page *pg, vm_prot_t prot)
Lower the permissions for all mappings of the page pg to prot. This function is used by the virtual memory system to implement copy-on-write (called with VM_PROT_READ set in prot) and to revoke all mappings when cleaning a page (called with no bits set in prot). Access permissions must never be added to a page as a result of this call.
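Typical calls from the virtual memory system, corresponding to the two uses described above, look like:

        /* Write-protect all mappings of pg for copy-on-write. */
        pmap_page_protect(pg, VM_PROT_READ);

        /* Remove all mappings of pg before cleaning the page. */
        pmap_page_protect(pg, VM_PROT_NONE);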
bool pmap_clear_modify(struct vm_page *pg)
Clear the “modified” attribute on the page pg.

The pmap_clear_modify() function returns true or false indicating whether or not the “modified” attribute was set on the page before it was cleared.

Note that this function may be provided as a C pre-processor macro.

bool pmap_clear_reference(struct vm_page *pg)
Clear the “referenced” attribute on the page pg.

The pmap_clear_reference() function returns true or false indicating whether or not the “referenced” attribute was set on the page before it was cleared.

Note that this function may be provided as a C pre-processor macro.

bool pmap_is_modified(struct vm_page *pg)
Test whether or not the “modified” attribute is set on page pg.

Note that this function may be provided as a C pre-processor macro.

bool pmap_is_referenced(struct vm_page *pg)
Test whether or not the “referenced” attribute is set on page pg.

Note that this function may be provided as a C pre-processor macro.

paddr_t pmap_phys_address(paddr_t cookie)
Convert a cookie returned by a device mmap() function into a physical address. This function is provided to accommodate systems which have physical address spaces larger than can be directly addressed by the platform's paddr_t type. The existence of this function is highly dubious, and it is expected that this function will be removed from the pmap API in a future release of NetBSD.

Note that this function may be provided as a C pre-processor macro.

OPTIONAL FUNCTIONS

This section describes several optional functions in the pmap API.
vaddr_t pmap_steal_memory(vsize_t size, vaddr_t *vstartp, vaddr_t *vendp)
This function is a bootstrap memory allocator, which may be provided as an alternative to the bootstrap memory allocator used within uvm(9) itself. It is particularly useful on systems that provide, for example, a direct-mapped memory segment. This function works by stealing pages from the (to be) managed memory pool, which has already been provided to uvm(9) in the vm_physmem[] array. The pages are then mapped, or otherwise made accessible to the kernel, in a machine-dependent way. The memory must be zeroed by pmap_steal_memory(). Note that memory allocated with pmap_steal_memory() will never be freed, and mappings made by pmap_steal_memory() must never be “forgotten”.

Note that pmap_steal_memory() should not be used as a general-purpose early-startup memory allocation routine. It is intended to be used only by the uvm_pageboot_alloc() routine and its supporting routines. If you need to allocate memory before the virtual memory system is initialized, use uvm_pageboot_alloc(). See uvm(9) for more information.

The pmap_steal_memory() function returns the kernel-accessible address of the allocated memory. If no memory can be allocated, or if allocated memory cannot be mapped, the function must panic.

If the pmap_steal_memory() function uses address space from the range provided to uvm(9) by the pmap_virtual_space() call, then pmap_steal_memory() must adjust *vstartp and *vendp upon return.

The pmap_steal_memory() function is enabled by defining the C pre-processor macro PMAP_STEAL_MEMORY in <machine/pmap.h>.

vaddr_t pmap_growkernel(vaddr_t maxkvaddr)
Management of the kernel virtual address space is complicated by the fact that it is not always safe to wait for resources with which to map a kernel virtual address. However, it is not always desirable to pre-allocate all resources necessary to map the entire kernel virtual address space.

The pmap_growkernel() interface is designed to help alleviate this problem. The virtual memory startup code may choose to allocate an initial set of mapping resources (e.g., page tables) and set an internal variable indicating how much kernel virtual address space can be mapped using those initial resources. Then, when the virtual memory system wishes to map something at an address beyond that initial limit, it calls pmap_growkernel() to pre-allocate more resources with which to create the mapping. Note that once additional kernel virtual address space mapping resources have been allocated, they should not be freed; it is likely they will be needed again.

The pmap_growkernel() function returns the new maximum kernel virtual address that can be mapped with the resources it has available. If new resources cannot be allocated, pmap_growkernel() must panic.

The pmap_growkernel() function is enabled by defining the C pre-processor macro PMAP_GROWKERNEL in <machine/pmap.h>.
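A skeleton of this scheme is sketched below. The pmap_maxkvaddr variable and the GROWKERNEL_STEP growth amount are hypothetical; each iteration would allocate and enter the page table pages needed to map the next chunk of kernel virtual address space, panicking if the allocation fails:

vaddr_t
pmap_growkernel(vaddr_t maxkvaddr)
{

        while (pmap_maxkvaddr < maxkvaddr) {
                /* Allocate and enter another page table page here. */
                pmap_maxkvaddr += GROWKERNEL_STEP;
        }
        return pmap_maxkvaddr;
}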

void pmap_fork(pmap_t src_map, pmap_t dst_map)
Some pmap implementations may need to keep track of other information not directly related to the virtual address space. For example, on the i386 port, the Local Descriptor Table state of a process is associated with the pmap (this is because applications that manipulate the Local Descriptor Table directly expect it to be logically associated with the virtual memory state of the process).

The pmap_fork() function is provided as a way to associate information from src_map with dst_map when a vmspace is forked. pmap_fork() is called from uvmspace_fork().

The pmap_fork() function is enabled by defining the C pre-processor macro PMAP_FORK in <machine/pmap.h>.

vaddr_t PMAP_MAP_POOLPAGE(paddr_t pa)
This function is used by the pool(9) memory pool manager. Pools allocate backing pages one at a time. This is provided as a means to use hardware features such as a direct-mapped memory segment to map the pages used by the pool(9) allocator. This can lead to better performance by e.g. reducing TLB contention.

PMAP_MAP_POOLPAGE() returns the kernel-accessible address of the page being mapped. It must always succeed.

The use of PMAP_MAP_POOLPAGE() is enabled by defining it as a C pre-processor macro in <machine/pmap.h>. If PMAP_MAP_POOLPAGE() is defined, PMAP_UNMAP_POOLPAGE() must also be defined.

The following is an example of how to define PMAP_MAP_POOLPAGE():

#define PMAP_MAP_POOLPAGE(pa)   MIPS_PHYS_TO_KSEG0((pa))

This takes the physical address of a page and returns the KSEG0 address of that page on a MIPS processor.

paddr_t PMAP_UNMAP_POOLPAGE(vaddr_t va)
This function is the inverse of PMAP_MAP_POOLPAGE().

PMAP_UNMAP_POOLPAGE() returns the physical address of the page corresponding to the provided kernel-accessible address.

The use of PMAP_UNMAP_POOLPAGE() is enabled by defining it as a C pre-processor macro in <machine/pmap.h>. If PMAP_UNMAP_POOLPAGE() is defined, PMAP_MAP_POOLPAGE() must also be defined.

The following is an example of how to define PMAP_UNMAP_POOLPAGE():

#define PMAP_UNMAP_POOLPAGE(va) MIPS_KSEG0_TO_PHYS((va))

This takes the KSEG0 address of a previously-mapped pool page and returns the physical address of that page on a MIPS processor.

void PMAP_PREFER(vaddr_t hint, vaddr_t *vap, vsize_t sz, int td)
This function is used by uvm_map(9) to adjust a virtual address being allocated in order to avoid cache alias problems. If necessary, the virtual address pointed to by vap will be advanced. hint is an object offset which will be mapped into the resulting virtual address, and sz is the size of the object. td indicates whether the machine-dependent pmap uses top-down VM.

The use of PMAP_PREFER() is enabled by defining it as a C pre-processor macro in <machine/pmap.h>.

void pmap_procwr(struct proc *p, vaddr_t va, vsize_t size)
Synchronize the CPU instruction cache for the specified virtual address range. The address space is designated by p. This function is typically used to flush instruction caches after code modification.

The use of pmap_procwr() is enabled by defining a C pre-processor macro PMAP_NEED_PROCWR in <machine/pmap.h>.

SEE ALSO

uvm(9)

HISTORY

The pmap module was originally part of the design of the virtual memory system in the Mach Operating System. The goal was to provide a clean separation between the machine-independent and the machine-dependent portions of the virtual memory system, in stark contrast to the original 3BSD virtual memory system, which was specific to the VAX.

Between 4.3BSD and 4.4BSD, the Mach virtual memory system, including the pmap API, was ported to BSD and included in the 4.4BSD release.

NetBSD inherited the BSD version of the Mach virtual memory system. NetBSD 1.4 was the first NetBSD release with the new uvm(9) virtual memory system, which included several changes to the pmap API. Since the introduction of uvm(9), the pmap API has evolved further.

AUTHORS

The original Mach VAX pmap module was written by Avadis Tevanian, Jr. and Michael Wayne Young.

Mike Hibler did the integration of the Mach virtual memory system into 4.4BSD and implemented a pmap module for the Motorola 68020+68851/68030/68040.

The pmap API as it exists in NetBSD is derived from 4.4BSD, and has been modified by Chuck Cranor, Charles M. Hannum, Chuck Silvers, Wolfgang Solfrank, Bill Sommerfeld, and Jason R. Thorpe.

The author of this document is Jason R. Thorpe <thorpej@NetBSD.org>.

BUGS

The use and definition of pmap_activate() and pmap_deactivate() needs to be reexamined.

The use of pmap_copy() needs to be reexamined. Empirical evidence suggests that performance of the system suffers when pmap_copy() actually performs its defined function. This is largely due to the fact that the copy of the virtual-to-physical mappings is wasted if the process calls execve(2) after fork(2). For this reason, it is recommended that pmap implementations leave the body of the pmap_copy() function empty for now.

November 4, 2009 NetBSD 6.1