OS MT3

by jvadair


174 cards

1

process control block

information stored on a context switch, including pc, registers, files, etc., most of what is in /proc/pid

2

system process

ambiguous, typically a process associated with the proper functioning of the system (must a server be running sshd? must a webserver be running apached or equivalent?)

3

nice

a value from -20 to +19 that is applied to the static PR value to create a dynamic PR value; 139 (nice +19) is the lowest-priority job in the system

4

priority boosting

where a job gets a temporary boost in PR value by the OS (nice is a user-initiated deviation)

5

phase change in program

when the process running the program enters a phase with different resource needs from earlier

6

Ousterhout's Law

avoid voo-doo constants

7

proportional share scheduler

scheduler starts with idea of equal or appropriately unequal divisions of latency into timeslices

8

lottery scheduler

randomized (Monte Carlo style) simulation of a proportional share scheduler; approximates the target shares by random sampling, so the proportions hold only asymptotically
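
A minimal sketch of the winning-ticket draw (the job list, ticket counts, and helper names are invented for the example): draw a random ticket in [0, total) and walk the job list until the running ticket sum passes it.

    /* lottery_pick.c -- minimal lottery-scheduling draw (illustrative only) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    struct job { const char *name; int tickets; };

    /* pick a winner with probability proportional to its tickets */
    int lottery_pick(struct job *jobs, int n, int total_tickets) {
        int winner = rand() % total_tickets;   /* winning ticket */
        int counter = 0;
        for (int i = 0; i < n; i++) {
            counter += jobs[i].tickets;
            if (counter > winner)
                return i;                      /* first job whose ticket range covers the winner */
        }
        return n - 1;                          /* unreachable if totals are consistent */
    }

    int main(void) {
        struct job jobs[] = { {"A", 75}, {"B", 25} };  /* A should win ~75% of draws */
        srand((unsigned)time(NULL));
        int wins[2] = {0, 0};
        for (int i = 0; i < 100000; i++)
            wins[lottery_pick(jobs, 2, 100)]++;
        printf("A: %d  B: %d\n", wins[0], wins[1]);
        return 0;
    }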

9

ticket mechanism

introduces a virtual economy to determine resource division

10

stride scheduler

deterministic version of lottery scheduler, uses different sized deterministic steps to achieve proportions

11

CFS

completely fair scheduler using vruntimes to determine who runs next (note unrelated to FCFS)

12

vruntime

actual cpu time weighted nonlinearly by PR (note the vruntime weight is called load, unrelated to system run queue length avgs)
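
A worked sketch of the nonlinear weighting, assuming the commonly cited load-weight convention (nice 0 maps to weight 1024, each nice step scales the weight by roughly 1.25x): vruntime advances by delta_exec * 1024 / weight, so lower-nice (heavier) tasks accumulate vruntime more slowly and get picked more often.

    /* vruntime_sketch.c -- how nice/load weight bends actual CPU time into vruntime
       (weights follow the commonly cited ~1.25x-per-nice-step convention; illustrative) */
    #include <stdio.h>

    #define NICE_0_LOAD 1024

    /* approximate load weight for a nice value: weight ~= 1024 / 1.25^nice */
    static double load_weight(int nice) {
        double w = NICE_0_LOAD;
        for (int i = 0; i < (nice > 0 ? nice : -nice); i++)
            w = (nice > 0) ? w / 1.25 : w * 1.25;
        return w;
    }

    /* vruntime advance for delta_exec nanoseconds of real CPU time */
    static double vruntime_delta(double delta_exec_ns, int nice) {
        return delta_exec_ns * NICE_0_LOAD / load_weight(nice);
    }

    int main(void) {
        double slice = 6e6;  /* 6 ms of actual CPU time */
        for (int nice = -5; nice <= 5; nice += 5)
            printf("nice %+d: vruntime += %.2f ms\n",
                   nice, vruntime_delta(slice, nice) / 1e6);
        return 0;
    }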

13

sched_latency

CFS target time to service all tasks in RR fashion, though it never uses RR to select next!

14

min_granularity

CFS min time on cpu before checking for preemption by lower vruntime in ready queue

15

sysctl

linux cli for viewing/changing system parameters, esp kernel as root

16

red-black tree

self-balancing binary search tree used as a priority queue: O(logn) insert/delete, with O(logn), or O(1) via a cached leftmost pointer, selection of the lowest vruntime

17

top

the default text-based system monitor in linux, H for thread-view, f to select fields, V for parent-view, d to change update time

18

PR

the static priority shown per process in top; the kernel value 120 is shown as 20, negative values have higher priority, rt is highest and uses a different policy

19

NI

the nice value shown per process in top output

20

TGID

thread group id in top output

21

CODE

size of executable per process in top output

22

nDRT

count of dirty pages per process in top output; these need write-back to disk

23

cache

(1) copy of data held temporarily in fast access area; (2) place to store data for faster access, temporarily, but possibly uniquely, ie, not a copy because of write-back policy

24

simultaneous cache lookup

sometimes can be done across multiple levels of cache

25

cache hit

reference satisfied entirely from the cache; hit rates are often shown around 98%, depending on temporal and spatial locality

26

LRU

least-recently used replacement policy (for cache, or DRAM)

27

pseudo-LRU

approximate LRU, often clock or other circular queue with dynamic next pointer

28

trapping to OS

typically from user mode to kernel mode to change state in a nontrivial or privileged way

29

k-way associative cache

each reference has k possible cache lines where it could be found/placed
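
A small sketch of how a reference's address is split to find that set (the 64-byte line size and 64 sets are assumptions for the example, not from the card): the low bits give the offset within a line, the next bits pick the set, and the remaining bits form the tag compared against the k ways of that set.

    /* set_assoc.c -- splitting an address into tag / set index / line offset
       (64-byte lines and 64 sets are illustrative assumptions) */
    #include <stdio.h>
    #include <stdint.h>

    #define LINE_BYTES   64   /* 6 offset bits */
    #define NUM_SETS     64   /* 6 index bits  */

    int main(void) {
        uint64_t addr = 0x7ffd12345678ULL;

        uint64_t offset = addr % LINE_BYTES;
        uint64_t set    = (addr / LINE_BYTES) % NUM_SETS;
        uint64_t tag    = addr / LINE_BYTES / NUM_SETS;

        /* in a k-way cache, compare 'tag' against the tags of the k lines in 'set' */
        printf("addr=%#llx  tag=%#llx  set=%llu  offset=%llu\n",
               (unsigned long long)addr, (unsigned long long)tag,
               (unsigned long long)set, (unsigned long long)offset);
        return 0;
    }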

30

fully-associative cache

a reference could be in any of the n available cache lines (the whole cache forms one set)

31

cache bank

segregation of cache into parts for more size, usually using higher bits for segregation, adds complexity and power drain

32

cache read miss

cache miss on an attempted read; must attempt read from slower storage; usually new data inserted into cache

33

cache write miss

cache miss on an attempted write; write directly to cache (possibly after replacement) then write back, or write through to slower storage and insert into cache

34

cache coherence

part of the hardware memory model to keep caches sharing lines "simultaneously" updated on writes, using propagation

35

transaction serialization

part of the hardware memory model to enforce certain kinds of correctness, esp wrt read and write interleaving

36

temporal locality

when one "piece of data" is referenced repeatedly within a small time interval; granularity could mean a byte, a line, a page/frame, or set of pages/frames

37

spatial locality

when data referenced is located "close to" data "recently" referenced; granularity could mean next byte, next contiguous page, next disk block, usually prefetchable

38

cache affinity / cpu affinity

often called the latter but meaning the former; preference for staying on a cpu with the same cache so data populating the cache can exploit temporal locality (and spatial if prefetched well)

39

capacity of the cache

number of lines, or more vaguely, its tendency to have cache hits across larger times, larger distances if prefetching

40

access pattern

the pattern of data references by a process in a phase

41

load balancing

spreading load across substitutable (but possibly not identical) resources, esp cpus (including performance and efficient cores in hybrid/asymmetric fashion)

42

per-CPU ready queue

each cpu (or set of cpus) has its own ready queue instead of sharing one; opposite of shared ready queue

43

cache flushing

removing data from a cache without on-demand replacement; often a full flush on context switch, usually for security, done by hw or os

44

cache invalidation

like cache flushing but perhaps just a line or few, not necessarily whole cache flush

45

warm in cache

sometimes used metaphor for good cache hit behavior for a process

46

spectre/meltdown

bad cve that involved cache snooping during speculative execution / misprediction in hw

47

linux o(1) scheduler

preemptive version of an MLFQ using two arrays of linked lists and switching the main pointer when the current array empties (actually only achieves RR for the lowest prio class since preemptive)

48

peek and poke

one way to induce behaviors when addresses are known, or programs loaded at knowable base locations and contiguous

49

external fragmentation

general term for small memory holes that cannot be coalesced because non-adjacent

50

vAddr

virtual address for program relocation or virtualization, not to be confused with virtual memory in re:swapping/paging to disk, basically the numbers in a process memory map

51

ASID

address space id, basically PID, but possibly some ASIDs are PID-less

52

VIRT

virtual address space charged per process in top output, often lazy allocation overcharge

53

RES

physical resident DRAM charged per process in top output, real memory use, modulo shared (SHR) memory

54

MMU

hardware memory management unit, off cpu, originally calculating offsets, now including TLB and other duties

55

TLB

translation lookaside buffer, contains cached translations from ASIDxvAddr to PFN, may have its own levels and banks

56

PFN

physical frame number

57

frame

quantum of actual DRAM, usually 4kB (4096 bytes)

58

segment

usually vAddr range, contiguous in eyes of process (but usually not contiguous in physical memory), possibly arbitrary length; in linux, a multiple of page size

59

memory hole

where segments are not adjacent, evidence of external fragmentation

60

page

fixed size (usually 4kB) of contiguous vAddrs; also huge pages, 2MB, 4MB, or more, rare birds often contiguous in PFNs

61

internal fragmentation

general term for wasted memory because minimum size of page or frame (or block, ...)

62

page table

how the OS keeps track of vAddr to PFN translations when TLB miss, often ridiculously sparse, hence use HPT or IPT, usually per process

63

PTE

a page table entry, typically one entry per page (not per segment); note larger or huge pages need fewer PTEs to cover the same range

64

CAM hardware

content-addressable memory, basically a fast associative array/hash table/dictionary used for TLB

65

TLB shootdown

basically a full cache flush or selective cache invalidation applied to the TLB

66

TLB tag

basically the ASID

67

heap allocation strategy

per malloc request policy for finding the space requested: best fit, worst, first, next, near, last freed, random; real allocators use a hybrid with bins of sizes

68

free list

idea that if there were a list of contiguous free bytes after malloc/free, one could traverse that list as part of a heap allocation strategy; may also be used in any contiguous memory allocation mechanism; used even if the data structure is not a linked list
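
A minimal sketch of traversing such a list with a first-fit strategy (node layout and names are illustrative; real allocators add headers, splitting, and coalescing):

    /* freelist_firstfit.c -- first-fit search over a singly linked free list (sketch) */
    #include <stdio.h>
    #include <stddef.h>

    struct free_block {
        size_t size;              /* bytes available in this hole */
        struct free_block *next;  /* next hole on the free list */
    };

    /* return the first hole big enough, or NULL; real code would split the hole */
    struct free_block *first_fit(struct free_block *head, size_t want) {
        for (struct free_block *b = head; b != NULL; b = b->next)
            if (b->size >= want)
                return b;
        return NULL;
    }

    int main(void) {
        struct free_block c = {100, NULL}, b = {30, &c}, a = {10, &b};
        struct free_block *hit = first_fit(&a, 25);
        printf("request 25 -> hole of %zu bytes\n", hit ? hit->size : 0);
        return 0;
    }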

69

append / prepend

in the context of free list, whether freed blocks of contiguous bytes go on front or back of linked list

70

coalescing

joining adjacent free ranges to create a larger range from their union

71

unsorted list

recently freed blocks of contiguous bytes not yet placed in sorted bins, for immediate reallocation of same size

72

thread safety

ensuring that parallel accesses by two or more threads maintain correct state (usually consistency or serialization) for next events (reads, writes, etc.)

73

slabs

pools of fixed sized objects used/reused by the kernel to avoid internal frag on pages/frames, may take more than 1 page

74

slab object

a fixed sized object in a slab; objects per slab determined in advance

75

buddy allocation

compromise between coalescing time and internal frag, no external frag, using contiguous pages in powers of two and coalescing only with buddy; /proc/buddyinfo shows buddy applied to physical memory (even though contiguity is less of an issue with dram)

76

fast bin

contiguous memory recently released, not coalesced, assuming another malloc might want same or similar size, like a user-level slab

77

HPT

hierarchical page table where pointers point to subtables (actually higher order bits are used for direct access), which themselves point to subtables; the point being that most of those subtables are empty, hence saving space by omitting them entirely; requires k references to find translation for height k
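
A sketch of a two-level walk for a 32-bit vAddr with a 10/10/12 bit split (the split and structures are illustrative): the top 10 bits index the outer table, the next 10 bits index an inner table, the low 12 bits are the page offset, and a missing inner table means that entire region has no mappings, which is where the space saving comes from.

    /* hpt_walk.c -- two-level page table walk for a 32-bit address (10/10/12 split; sketch) */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define ENTRIES 1024          /* 2^10 entries per level */
    #define PAGE_SIZE 4096        /* 2^12 byte pages */

    typedef struct { uint32_t pfn; int valid; } pte_t;
    typedef struct { pte_t *inner[ENTRIES]; } pgdir_t;   /* NULL inner table = empty region */

    /* translate vaddr to a physical address, or return -1 on a fault */
    long translate(pgdir_t *dir, uint32_t vaddr) {
        uint32_t top = vaddr >> 22;                 /* outer index */
        uint32_t mid = (vaddr >> 12) & 0x3FF;       /* inner index */
        uint32_t off = vaddr & 0xFFF;               /* page offset */
        pte_t *tbl = dir->inner[top];
        if (tbl == NULL || !tbl[mid].valid)
            return -1;                              /* page fault */
        return (long)tbl[mid].pfn * PAGE_SIZE + off;
    }

    int main(void) {
        pgdir_t dir = {0};
        pte_t *tbl = calloc(ENTRIES, sizeof(pte_t));
        tbl[5] = (pte_t){ .pfn = 42, .valid = 1 };  /* map exactly one page */
        dir.inner[3] = tbl;

        uint32_t vaddr = (3u << 22) | (5u << 12) | 0x123;
        printf("vaddr 0x%x -> paddr 0x%lx\n", vaddr,
               (unsigned long)translate(&dir, vaddr));
        return 0;
    }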

78

valid bit

often used to indicate page is in memory in a page table entry, just as a dirty bit indicates writeback needed

79

bitmap

compressed data structure for an array of booleans, which would usually use a byte per boolean, so 8:1 compression and direct addressing
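
A quick sketch of the usual bit arithmetic (helper names are illustrative):

    /* bitmap.c -- 1 bit per boolean instead of 1 byte (sketch) */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define NBITS 64
    static uint8_t map[NBITS / 8];   /* 64 booleans in 8 bytes */

    static void set_bit(int i)   { map[i / 8] |=  (uint8_t)(1u << (i % 8)); }
    static void clear_bit(int i) { map[i / 8] &= (uint8_t)~(1u << (i % 8)); }
    static int  test_bit(int i)  { return (map[i / 8] >> (i % 8)) & 1; }

    int main(void) {
        memset(map, 0, sizeof map);
        set_bit(10);
        printf("bit 10 = %d, bit 11 = %d\n", test_bit(10), test_bit(11));
        clear_bit(10);
        printf("bit 10 = %d after clear\n", test_bit(10));
        return 0;
    }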

80

PTBR

page table base register, allows quick indexing with offsets added to base

81

PTLR

page table length register, at least for each table in a hierarchy, allows correctness checking of any offset indexing

82

TLB miss

reference whose translation is not in the TLB, so the OS (in sw) or a hardware walker must find it in the HPT or IPT; associative caches can be searched in parallel and often are; L1, L2, L3 can be searched in parallel but often are NOT (power/complexity of routing issue)

83

IPT

inverted page table, software hash for page table translation

84

cache conscious layout

improving spatial locality so prefetching lines is improved (see also chasing pointers on disk)

85

swap space

allowing part of disk to act as virtual memory

86

swap file

one way to implement swap space esp in microsoft OS, see also swap partition (linux)

87

page in/page out

transferring data from memory to swap space on disk, or vice versa

88

memory pressure

vague idea of rate of memory consumption, see also high and low watermark

89

swappiness

linux kernel parameter for tendency to swap out before 100% of memory used, 0-100

90

page fault

when a PTE does not find a page in memory (maybe in swap space, maybe needs to be created, maybe file on disk mapped in memory)

91

nMaj, vMaj

major page faults, number or delta since last update of top, required access of secondary memory (disk)

92

nMin, vMin

minor page faults, number or delta, did not require disk i/o, served by disk cache (or swap cache)

93

disk cache

memory that is copy of what is on disk but still in memory, i.e., disk-backed non-dirty frames, adds to free mem to show available

94

swap cache

memory that is copy of what is on disk specifically in swap space; i.e., it should be in memory but was swapped out to disk but a copy is still in memory

95

buffer

often reported in linux with disk cache, but different purpose, for buffering writes esp streaming i/o

96

mlocked

memory locked in RAM that cannot be swapped out

97

kswapd

kernel daemon responsible for swap out

98

page replacement algorithm or policy

strategy for deciding victim page/frame when paging in and insufficient free memory

99

vm.page-cluster

power of 2 for max number of contiguous pages swapped in as pre-fetch when swapping in a page

100

clustering

may also refer to swap out, not just swap in

101

virtual memory

ambiguous term for using swap area on disk for pages/frames mapped to secondary memory (NOT vAddr idea)

102

extended memory

marketing term for swap space

103

VMS

old DEC OS that was proud of its virtual memory subsystem

104

appliance OS

lower-functionality OS than general-purpose, aimed at controlling a specific kind of system

105

incrementalism

philosophy of design where the new is based heavily on the prior to avoid massive unforeseen failures

106

thread-specific stack

stack area for a thread visible in address space of threads sharing thread group

107

thread-local storage

area adjacent to the thread-specific stack where global variables get localized to keep threads from colliding, making routines that were not intended for multi-threading MT-safe

108

lightweight process clone

pthread_create
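
A minimal create/join example (compiled with the -lpthread flag from the next card; names are illustrative):

    /* hello_thread.c -- minimal pthread_create / pthread_join
       (compile: cc hello_thread.c -lpthread) */
    #include <stdio.h>
    #include <pthread.h>

    static void *worker(void *arg) {
        printf("hello from thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, (void *)1L);  /* clone a lightweight process */
        pthread_join(t, NULL);                         /* wait for it to finish */
        return 0;
    }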

109

lpthread

the library flag (-lpthread) required at the command line with gcc or cc when using a thread library

110

data race

when nondeterminism results because of timing of thread execution that accesses shared data, specifically involving writes

111

mutex

one user at a time for a resource, mutual exclusion, usually a mechanism for creating mutual exclusion, if not guaranteeing it

112

spinlock

a lock used to achieve mutex, where threads spin wait on cpu to acquire the lock once freed/released
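
A minimal sketch built on the C11 atomic_flag, which is a hardware test-and-set underneath (illustrative, not production code; the spin loop burns CPU until the holder releases):

    /* spinlock.c -- busy-wait lock on top of C11 atomic_flag (test-and-set) */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    static void spin_lock(void)   { while (atomic_flag_test_and_set(&lock)) /* spin */ ; }
    static void spin_unlock(void) { atomic_flag_clear(&lock); }

    int main(void) {
        spin_lock();
        printf("in the critical section\n");
        spin_unlock();
        return 0;
    }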

113

semaphore

a counter protecting a resource that is shareable up to a limit; it locks the resource once the limit is reached
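
A minimal POSIX semaphore sketch where the count is 2, so at most two threads hold the resource at once (the thread count and sleep are illustrative; compile with -lpthread):

    /* sem_demo.c -- counting semaphore allowing 2 concurrent holders
       (compile: cc sem_demo.c -lpthread) */
    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t slots;

    static void *worker(void *arg) {
        sem_wait(&slots);                 /* decrement; block if count is 0 */
        printf("thread %ld holds a slot\n", (long)arg);
        sleep(1);
        sem_post(&slots);                 /* increment; wake a waiter if any */
        return NULL;
    }

    int main(void) {
        pthread_t t[3];
        sem_init(&slots, 0, 2);           /* resource shareable by up to 2 users */
        for (long i = 0; i < 3; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }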

114

atomic operation

usually guaranteed by hardware, a multi-step operation that cannot be interrupted by another thread or process once started

115

barrier

a software declaration that memory or processes need to be synchronized at some point, usually with waits issued by the compiler

116

advisory lock

a lock that requires a thread to check whether the lock is held voluntarily (linux)

117

mandatory lock

a lock that is enforced by the OS (microsoft)

118

lock granularity

the size of the data locked, usually determining the amount of potential parallelism

119

spin wait

also wait spin, busy wait, using cpu to poll for availability of a lock or flag, but can be paused to shorten wasted timeslices

120

test and set

one kind of atomic operation often provided by hardware

121

compare and swap

another kind of atomic operation
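
A small sketch using the C11 compare-exchange to increment a counter without a lock (illustrative): the update succeeds only if the value has not changed since it was read, otherwise it retries.

    /* cas_inc.c -- lock-free increment via compare-and-swap (C11 atomics) */
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic int counter = 0;

    static void cas_increment(void) {
        int old, desired;
        do {
            old = atomic_load(&counter);
            desired = old + 1;
            /* succeeds only if counter still equals old; otherwise retry */
        } while (!atomic_compare_exchange_weak(&counter, &old, desired));
    }

    int main(void) {
        for (int i = 0; i < 5; i++)
            cas_increment();
        printf("counter = %d\n", atomic_load(&counter));
        return 0;
    }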

122

Peterson's Algorithm

turn-yielding, provably correct way of achieving mutex in software with three values (two variables, one of them a two-element array), if there is sequential consistency
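
A sketch of the three values for two threads, using C11 sequentially consistent atomics to stand in for the sequential-consistency requirement (names are illustrative):

    /* peterson.c -- Peterson's algorithm for two threads; correctness needs
       sequential consistency, approximated here with C11 seq_cst atomics
       (compile: cc peterson.c -lpthread) */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static _Atomic int flag[2] = {0, 0};  /* flag[i]: thread i wants the lock */
    static _Atomic int turn = 0;          /* whose turn it is to yield */
    static int shared = 0;                /* protected counter */

    static void p_lock(int me) {
        int other = 1 - me;
        atomic_store(&flag[me], 1);
        atomic_store(&turn, other);       /* politely give the other the turn */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                             /* spin until it is safe to enter */
    }

    static void p_unlock(int me) {
        atomic_store(&flag[me], 0);
    }

    static void *worker(void *arg) {
        int me = (int)(long)arg;
        for (int i = 0; i < 100000; i++) {
            p_lock(me);
            shared++;
            p_unlock(me);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, worker, (void *)0L);
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("shared = %d (expect 200000)\n", shared);
        return 0;
    }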

123

memory consistency model

memory operation discipline of the hardware (and possibly compiler), especially wrt cache read/write reorderings or cache coherence latency

124

TSO

total store order, better than PSO but not as good as sequential consistency, as memory consistency models go

125

Lamport's Bakery Algorithm

like Peterson's algorithm but uses a ticket numbering idea

126

hinting

an idea where the programmer hints to the OS, e.g., that it is spin-waiting, so a pause or timeslice early expiry is possible

127

Futex

fast userspace mutex, sometimes does not require the kernel's intervention (when lock is free, fastpath), did not have the bug that pthread_mutex_lock had

128

Dahm lock

two phase, spins first, then sleeps and waits for notification (interrupts sleep)

129

MCS lock

each thread spins on its own local variable and notifies the next waiter in line on release

130

priority inversion

where a lock is held by a lower prio process and a higher prio process needs that locked resource to proceed

131

priority inheritance

when there is priority inversion, the lower prio process gets the priority of the highest prio process that is waiting for the lock to be released

132

random boosting

microsoft way of solving priority inversion, randomly boosting priority of processes when locks are held (and there are processes waiting)

133

deadlock

when there is a circular wait of any length and progress cannot be made by any thread/process in the circle

134

Coffman conditions for deadlock

mutual exclusion, hold-and-wait, no preemption, circular wait

135

livelock

vague idea of processes not technically deadlocked but not making progress, threads not blocked but not happy, typically cycling through states

136

bankers algorithm

in 1d, require before giving a resource (e.g., a lock to a thread) that the recipient be able to complete and free that resource; in k-d, must simulate possible n-thread allocation orders and find at least one sequence of execution that leads to release of resource

137

Dining Philosophers' Problem

simple illustration of deadlock-prone multi-processing with n threads and n pairwise shared resources

138

ordered pickup

one solution to avoiding deadlock in Dining Philosophers'
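
A sketch of ordered pickup (constants and names are illustrative): every philosopher locks the lower-numbered fork first, which breaks the circular wait.

    /* ordered_forks.c -- dining philosophers with ordered pickup, no circular wait
       (compile: cc ordered_forks.c -lpthread) */
    #include <pthread.h>
    #include <stdio.h>

    #define N 5
    static pthread_mutex_t fork_lock[N];

    static void *philosopher(void *arg) {
        int i = (int)(long)arg;
        int left = i, right = (i + 1) % N;
        int lo = left < right ? left : right;   /* always lock the lower-numbered fork first */
        int hi = left < right ? right : left;
        for (int round = 0; round < 1000; round++) {
            pthread_mutex_lock(&fork_lock[lo]);
            pthread_mutex_lock(&fork_lock[hi]);
            /* eat */
            pthread_mutex_unlock(&fork_lock[hi]);
            pthread_mutex_unlock(&fork_lock[lo]);
        }
        printf("philosopher %d done\n", i);
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        for (int i = 0; i < N; i++) pthread_mutex_init(&fork_lock[i], NULL);
        for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }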

139

approximate counter

one illustration of delayed global accuracy so local counts can be separated for parallelism
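
A simplified sketch of the idea (the threshold S and names are illustrative, and the local count here is thread-private rather than lock-protected): each thread counts locally and only folds into the global counter every S increments, trading a slightly stale global value for far less lock contention.

    /* approx_counter.c -- per-thread local counts flushed to a global at a threshold
       (compile: cc approx_counter.c -lpthread) */
    #include <pthread.h>
    #include <stdio.h>

    #define S 1024                       /* flush threshold: accuracy vs. contention knob */

    static long global = 0;
    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        long local = 0;                  /* private, so no lock needed while counting */
        for (int i = 0; i < 1000000; i++) {
            local++;
            if (local == S) {            /* only now touch the shared counter */
                pthread_mutex_lock(&global_lock);
                global += local;
                pthread_mutex_unlock(&global_lock);
                local = 0;
            }
        }
        pthread_mutex_lock(&global_lock);  /* final flush of the remainder */
        global += local;
        pthread_mutex_unlock(&global_lock);
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("global = %ld (expect 4000000)\n", global);
        return 0;
    }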

140

lock overhead

penalty for synchronization that prevents linear speedup with parallel processing

141

hand over hand locking

locking nodes within a linked list rather than the whole linked list

142

concurrent queue

another illustration of parallelizing data structure, head and tail have separate locks

143

concurrent hash table

another illustration of parallelizing data structure, lock each collision chain for a hash bucket

144

Knuth's Law

premature optimization is the root of all evil

145

RAID

redundant array of independent disks, a storage organization/controller that stripes for performance, mirrors for redundancy, or adds parity for error detection AND correction (RAID 0 striping, RAID 1 mirroring)

146

CRC

cyclic redundancy check, essentially a fancier checksum: the data chunks are treated as one large binary value, divided by an agreed constant k, and the remainder is stored

147

inode

in linux, where we store metadata, possibly even a checksum

148

TMR

triple modular redundancy, a voting method for robust operation against 1-failure

149

n-MR

n (odd n?) generalization of TMR to survive floor(n/2)-failure

150

ZFS

Zettabyte file system which writes anew then switches pointer to data block rather than making changes in place

151

bit rot

gradual, silent corruption of stored data; what disk scrubbing (checking data against checksums) tries to detect and repair

152

authentication

allowing access to a resource based on something you know, have, or are; also how you behave, where you are, being signed on elsewhere, or the system you have

153

MFA

multi-factor authentication

154

hashed passwords

stored passwords not in plain text but using a hash function as cipher

155

brute force

breaking a password by enumerating all possible values

156

rainbow table

if the hash function is known, pre-computing the hashes to discover common passwords easily from their hashed form (e.g., stealing the password file in hashed form)

157

salt

added to user passwords to extend the length, unique to each password instance, combats rainbow tables, naive users

158

pepper

added to user passwords to extend the length, stored separately, often same per user, combats theft of password hashes with salts, naive sysadmins

159

password exposure

times and places where password must be revealed for authentication, hence available for leak, see progressive revelation

160

perimeter authentication

idea that once authenticated, movement within the perimeter requires no further authentication, vs continuous authentication, layered defense, defense in depth

161

attack surface

places where a system is vulnerable to attack by malign actor (independent of other failures that are non-adversarial)

162

security by obscurity

hiding access in such a way that once the path or secret is known, there is no further or future protection

163

principle of least privilege

idea that users and processes should have no more privileges than necessary to perform their intended task(s)

164

MTD

moving target defense, changing locations, names, or paths to access so those in the know retain access but not those who discover but do not update

165

SDN

software defined network, one way to implement MTD for servers

166

RBAC

role-based access control vs ABAC, attribute-based access control

167

HistBAC

history-based access control

168

PathBAC

path-based access control

169

virtual machine

machine that is not bare-metal; requires emulation on a bare-metal machine (BMM); scalable, isolated, secure

170

hypervisor

software that runs a guest OS as virtual machine; type-1: no host OS; type-2: host OS you can use independently; plays on supervisor=monitor=OS

171

Xen

example popular type 1 hypervisor

172

KVM

example popular type 2 hypervisor

173

paravirtualization

allowing guest OS to communicate directly with host OS or BMM drivers, esp for i/o performance using hypercalls, bypassing emulation

174

container

often considered an alternative to a hypervisor, but really a packaging of software rather than isolation and emulation; less secure; more like virtual backend web services with minimal dependencies specified in config files