393 Final flashcards

by jvadair



116 cards

1

composite design pattern

Compose objects into tree structures to represent whole-part hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly. Recursive composition: "directories contain entries, each of which could be a directory." A 1-to-many "has a" relationship up the "is a" hierarchy
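The card's directory example can be sketched in Python; the class names here are illustrative, not from the course:

```python
from abc import ABC, abstractmethod

class Entry(ABC):
    """Component: the uniform interface clients see."""
    @abstractmethod
    def size(self) -> int: ...

class File(Entry):
    """Leaf: an individual object."""
    def __init__(self, nbytes: int):
        self.nbytes = nbytes
    def size(self) -> int:
        return self.nbytes

class Directory(Entry):
    """Composite: "is a" Entry, "has a" 1-to-many collection of Entry."""
    def __init__(self):
        self.entries: list[Entry] = []
    def add(self, e: Entry) -> None:
        self.entries.append(e)
    def size(self) -> int:
        # Recursive composition: a child may itself be a Directory.
        return sum(e.size() for e in self.entries)
```

A client calling `size()` does not care whether it holds a `File` or a whole subtree.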

2

strategy design pattern

Defines a family of algorithms, encapsulates each one, and makes them interchangeable
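A minimal sketch of the pattern, with an invented compression example (names are illustrative):

```python
import zlib
from abc import ABC, abstractmethod

class CompressionStrategy(ABC):
    """One interface for a family of interchangeable algorithms."""
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class ZlibStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

class NoOpStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data

class Archiver:
    """Context: delegates to whichever strategy it was configured with."""
    def __init__(self, strategy: CompressionStrategy):
        self.strategy = strategy
    def store(self, data: bytes) -> bytes:
        return self.strategy.compress(data)
```

Swapping algorithms means passing a different strategy object; `Archiver` itself never changes.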

3

decorator design pattern

Attaches additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality.
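A small sketch of stacking decorators at runtime, using an invented notification example:

```python
class Notifier:
    """Concrete component: base behavior."""
    def send(self, msg: str) -> list[str]:
        return [f"email: {msg}"]

class NotifierDecorator:
    """Wraps another notifier and conforms to the same interface."""
    def __init__(self, wrapped):
        self.wrapped = wrapped
    def send(self, msg: str) -> list[str]:
        return self.wrapped.send(msg)

class SMSDecorator(NotifierDecorator):
    """Adds a responsibility dynamically, instead of subclassing Notifier."""
    def send(self, msg: str) -> list[str]:
        return self.wrapped.send(msg) + [f"sms: {msg}"]

class SlackDecorator(NotifierDecorator):
    def send(self, msg: str) -> list[str]:
        return self.wrapped.send(msg) + [f"slack: {msg}"]
```

Any combination of channels can be composed at runtime without a subclass per combination.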

4

abstract factory design pattern

Provide an interface for creating families of related or dependent objects without specifying their concrete classes.

5

bridge design pattern

separates an abstract class hierarchy from its implementation so the two can vary independently

6

command design pattern

Encapsulate a request as an object, thereby letting you parameterize other objects with different requests, queue or log requests, and support undoable operations.
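A sketch of a request packaged as an object, with a history log that enables undo (class names are illustrative):

```python
class AddItem:
    """Concrete command: the request (append a value) as an object,
    carrying enough state to support undo."""
    def __init__(self, items: list, value):
        self.items, self.value = items, value
    def execute(self):
        self.items.append(self.value)
    def undo(self):
        self.items.remove(self.value)

class Invoker:
    """Runs commands and logs them, enabling undo of the most recent one."""
    def __init__(self):
        self.history = []
    def run(self, cmd):
        cmd.execute()
        self.history.append(cmd)
    def undo_last(self):
        self.history.pop().undo()
```

Because commands are plain objects, they can also be queued or serialized to a log before execution.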

7

iterator design pattern

a design pattern that provides a way to access the elements of an aggregate object sequentially without exposing its underlying representation; assumes that the underlying collection is not changed or modified while the traversal is occurring

8

visitor design pattern

concrete subclasses of Visitor perform specific analyses; most appropriate when you want to do a variety of things to objects that have a stable class structure

9

adapter design pattern

- convert the interface of a class into another interface clients expect - wrap an existing class with a new interface - impedance match an old component to a new system
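An impedance-matching sketch, using an invented sensor example (names are illustrative):

```python
class LegacyTemperatureSensor:
    """Old component: reports Fahrenheit via read_f()."""
    def read_f(self) -> float:
        return 212.0

class CelsiusSensor:
    """The interface clients of the new system expect."""
    def read_celsius(self) -> float:
        raise NotImplementedError

class SensorAdapter(CelsiusSensor):
    """Wraps the existing class with the new interface, converting units."""
    def __init__(self, legacy: LegacyTemperatureSensor):
        self.legacy = legacy
    def read_celsius(self) -> float:
        return (self.legacy.read_f() - 32) * 5 / 9
```

Clients call `read_celsius()` and never see the legacy interface.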

10

proxy design pattern

provides a surrogate or placeholder for another object to control or facilitate access to it - useful for when you need to modify the behavior of a class for some of its clients without changing the interface

11

internal-facing quality

developer facing - program should be readable, maintainable, etc. -> human code review, static analysis tools and linters, programming idioms and design patterns, local coding standards

12

external-facing quality

customer facing - program should do the right thing -> behave according to a specification, robustness against maintenance mistakes

13

the halting problem

there cannot be a program that determines which computer programs will halt (or exit) and which will go on forever (infinite loop), so we approximate using type systems, linters, static analyzers, etc.

14

software analysis

the systematic examination of a software artifact to determine its properties

15

in the definition of software analysis, what do we mean by systematic?

attempting to be comprehensive, as measured by, for example: test coverage, inspection checklists, exhaustive model checking

16

in the definition of software analysis, what are the two types of examination?

automated: regression testing, static analysis, dynamic analysis; manual: manual testing, inspection, modeling

17

in the definition of software analysis, what is an artifact?

code, system, module, execution trace, test case, design or requirements document

18

dynamic testing

direct execution of code on test data in a controlled environment

19

static inspection

human evaluation of code, design documents (specs and models), modifications

20

dynamic vs static analysis

dynamic: tools extracting data from test runs; static: tools reasoning about the program without executing it

21

failure

runtime deviation from required behavior

22

defect, fault, or bug

a flaw in a program that can cause it to fail

23

error

human action that results in a defect or fault

24

correctness

the property that a program absolutely satisfies its functional requirements

25

reliability measure

numerical measure of the extent to which the behavior of deployed software conforms to requirements (e.g., frequency of failures)

26

synthetic testing

a tester or tool generates the test data; types: functional testing, structural testing, boundary/special values testing, interaction testing, model-based testing

27

functional testing

exercises each specified functional requirement at least once

28

equivalence partition and subdomain testing

divide the input domain of the program into a finite number of subdomains -> inputs in each subdomain are treated similarly -> one or a few test cases are selected from each subdomain; disjoint subdomains can be viewed as equivalence classes
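As a sketch, consider a hypothetical function under test whose input domain splits into three valid subdomains plus an invalid one (the function and boundaries are invented for illustration):

```python
def shipping_cost(weight_kg: float) -> float:
    """Subdomains: invalid (<= 0), (0, 1], (1, 10], (10, inf)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0
    if weight_kg <= 10:
        return 12.0
    return 30.0

# One representative input selected from each disjoint valid subdomain;
# inputs within a subdomain are treated the same, so one each suffices.
REPRESENTATIVES = {0.5: 5.0, 4.0: 12.0, 25.0: 30.0}
```

The invalid subdomain gets its own case checking that a `ValueError` is raised.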

29

structural testing

exercises or covers each program element of a certain type -> percent coverage used as one measure of test-completeness

30

operational/field testing

software tested in field (or with inputs captured in field); prerequisite for measuring reliability

31

simulation testing

approximation to field testing; used when field testing is infeasible; requires probabilistic simulation model

32

what are some alternatives to testing?

inspection, program analysis, reliability estimation, program verification (formal verification)

33

inspection

feasible and beneficial, but inspectors overlook many defects

34

program analysis

effective for finding violations of implicit programming rules

35

reliability estimation

employs statistical sampling methodology, based on field testing

36

program verification (formal verification)

- intended to produce proof of correctness - manual is impractical for large programs - partial automation options: automatic theorem proving or finite-state model checking

37

what are the principal costs of traditional testing?

analysis of specification and program, test data creation/generation, program execution, evaluation of program behavior

38

test oracle

a source used to determine an expected result to compare with the actual result of the system under test; can be partially automated (full automation requires a correct implementation of the requirements, a "gold" program)

39

fuzz testing

a software testing technique that deliberately provides invalid, unexpected, or random data as inputs to a computer program; does not provide full expected outputs; without an automated oracle, it may detect only exceptions, assertion failures, crashes, and hangs
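A toy sketch of the oracle-less situation: the fuzzer below can only observe crashes (exceptions), not silently wrong outputs. Both the fuzzer and the buggy target are invented for illustration:

```python
import random

def fuzz(target, trials=200, max_len=20, seed=0):
    """Feed random byte strings to `target`; record any crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

def parse_header(data: bytes):
    # Toy target with a planted bug: crashes on inputs shorter than 2 bytes.
    return data[0], data[1]
```

Random inputs quickly hit the short-input case, but a wrong-but-non-crashing parse would go unnoticed without an oracle.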

40

what are typical testing phases?

unit testing -> integration & subsystem testing -> system testing -> field testing -> acceptance testing -> regression testing; release stages: pre-alpha -> alpha -> beta -> release candidate (rc) -> release

41

unit testing

- routine, module, or class tested - analysis and evaluation are simplest - driver and stubs often needed - supported by frameworks like xUnit

42

integration & subsystem testing

- units integrated into subsystem - interactions between units tested - analysis and evaluation are usually harder

43

system testing

- subsystems integrated - analysis and evaluation are usually hardest

44

field testing

- system used in field by ordinary users - users report problems - one or more deployment sites - extended duration (e.g., months)

45

acceptance testing

- testing by customer - basis for accepting software

46

regression testing

- retesting after maintenance - existing tests often reused - "general case" of testing

47

alpha testing

performed by testers who are internal employees of development organization at the developer's site

48

beta testing

performed by customers or end users at location of client or end user of product

49

xunit features/definitions

input (data), oracle (expected output), comparator (compares actual output against the oracle), discoverer (finds all test cases), runner (chooses which cases to run); test case: a piece of code that establishes preconditions, performs an operation, and asserts postconditions; test fixture: code to be run before/after each test case
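Python's `unittest` is an xUnit framework, so the pieces above can be labeled directly in a small sketch (the stack example is invented):

```python
import io
import unittest

class StackTest(unittest.TestCase):
    # Fixture: setUp (and tearDown) run before/after each test case.
    def setUp(self):
        self.stack = []                          # establish preconditions

    def test_push_then_pop(self):
        self.stack.append(42)                    # perform an operation
        self.assertEqual(self.stack.pop(), 42)   # assert postconditions
        self.assertEqual(self.stack, [])

# Discoverer: finds all test_* cases; runner: executes the chosen ones.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

The `assertEqual` calls play the comparator role; the hard part in practice is the oracle, i.e., knowing what the expected value should be.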

50

advantages of unit testing

- tests features in isolation -> when a test fails it is easier to locate the bug - tests are small -> easier to understand - tests are fast -> can be run frequently

51

when to test?

test each component as soon as it becomes possible

52

control flow graph

depicts potential control flow between program statements or instructions; coverage criteria: statement, branch, condition, function coverage

53

boundary/special values testing

used in conjunction with functional and structural testing; involves testing with boundary or special values of variables

54

load testing

involves stressing software with high loads

55

interaction testing

testing interactions between variables, objects, events, statements, components, or features; feature interaction: integrating two features can modify the behavior of one or both

56

combinatorial interaction testing (cit)

a black-box testing technique that samples input parameters and configurations and combines them systematically; calls for testing all t-way interactions (for a chosen t); uses combinatorial algorithms to construct covering arrays for n-way interactions

57

data flow/dependence analysis

find evidence of potential causal interactions between program elements; shown in a program dependence graph

58

dependence testing

many testing techniques call for covering specific PDG sub-structures (chains of dependencies, PDG subgraphs)

59

object-oriented interaction testing

exercises object interaction scenarios using mock objects

60

mock object

dummy object that mimics a real object's behavior

61

when should you use a mock object?

• The real object does not yet exist • It has nondeterministic behavior • It is difficult to set up • It has behavior that is hard to trigger • It is slow • It is a user interface • It uses a callback
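Several of these cases (slow, nondeterministic, hard-to-trigger behavior) appear in this `unittest.mock` sketch; the health-check scenario and all names are invented:

```python
from unittest import mock

def alert_if_down(checker, pager) -> bool:
    """System under test: pages on-call when the health check fails."""
    if not checker.is_healthy():
        pager.page("service down")
        return True
    return False

# The real checker is slow and nondeterministic, and actually paging
# someone is behavior that is hard (and unwise) to trigger -- so both
# collaborators are replaced with mocks.
checker = mock.Mock()
checker.is_healthy.return_value = False   # force the hard-to-trigger path
pager = mock.Mock()
paged = alert_if_down(checker, pager)
```

The mock also records its calls, so the test can verify the interaction itself, not just the return value.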

62

model-based testing

any type of testing guided by a behavioral model; limitation: model omissions

63

fault-based testing

involves selecting test cases to reveal a specific kind of programming fault

64

mutation testing

small changes are automatically injected into a program, creating mutant versions; tests are run, and if a mutant produces different output than the original, it is killed
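A hand-rolled sketch of the idea (real mutation tools inject the changes automatically; this example and its planted mutation are invented):

```python
def max_of(a, b):
    return a if a >= b else b

def mutant_max_of(a, b):
    # Injected small change: ">=" mutated to "<=".
    return a if a <= b else b

# (input args, expected output) pairs acting as the test suite.
TESTS = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def kill_count(fn):
    """A mutant is killed when some test observes a different output."""
    return sum(fn(*args) != expected for args, expected in TESTS)
```

A mutant that survives every test suggests the suite has a coverage gap (or the mutant is equivalent to the original).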

65

what is the core functionality of AFL++

• Instrumenting the target program to gather coverage information • Generating and mutating test cases • Executing the program with these test cases • Analyzing the execution feedback to guide further fuzzing

66

penetration testing

security-oriented testing with the goal of reporting found vulnerabilities

67

what are some techniques for revealing omissions from spec and program?

• Boundary/special values testing • Independent testing by application expert • Random testing • Operational (field) testing • Mining code base to discover programming rules and rule violations

68

why are performance bugs "bad" bugs?

they don't usually generate incorrect results or crashes, so they are difficult to diagnose; confounding factors include: system load, hardware configuration, network conditions, user-specific workflows, interactions with other systems

69

profiling

a process to analyze and measure the performance of a program or specific parts of its code

70

tracing

records the sequential events (e.g., function calls) that occur during the execution of a program; concerned with understanding the flow of execution and the behavior of the program

71

what is the difference between validation and verification?

verification: is it built correctly / are there incorrect design choices? validation: did we capture the requirements correctly?

72

what is included in a software defect report?

information and communications related to addressing a software issue; contains severity and priority information, and has a complex nonlinear lifecycle

73

feature request

a potential change to intended purpose (requirements of software)

74

what does triage refer to in defect reporting?

the process of prioritizing defect reports based on severity and urgency; weighs how expensive it is to fix a bug against the cost of not fixing it

75

bug report

provides information about a defect, created by testers, users, or tools

76

what does severity measure in defect reporting?

the degree of impact that a defect has on the development or operation of a system

77

what does defect priority indicate?

the importance or urgency of fixing a defect

78

distributed defect assignment

developers watch the incoming bug report queue and claim defects for themselves

79

centralized defect assignment

one or more people in QA watch the incoming bug report queue and assign reports to a pool of developers

80

what are the two main causes of security bugs?

memory bugs (severe) and semantic bugs

81

fault localization

the task of identifying source code regions implicated in a bug

82

debugger

a software tool that is used to detect the source of program or script errors, by performing step-by-step execution of application code and viewing the content of code variables

83

spectrum-based fault localization tool

uses a dynamic analysis to rank suspicious statements implicated in a fault by comparing the statements covered on failing tests to the statements covered on passing tests

84

suspiciousness ranking

susp = (fails / total_fail) / (fails / total_fail + passes / total_pass)
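The formula above (the Tarantula-style score) is easy to sanity-check in code; the guard for the all-zero case is my own defensive addition:

```python
def suspiciousness(fails, total_fail, passes, total_pass):
    """Score a statement: covered mostly by failing tests -> near 1,
    covered mostly by passing tests -> near 0."""
    fail_ratio = fails / total_fail
    pass_ratio = passes / total_pass
    if fail_ratio + pass_ratio == 0:
        return 0.0  # statement covered by no tests at all
    return fail_ratio / (fail_ratio + pass_ratio)
```

A statement covered by every failing test and no passing test scores 1.0, i.e., it tops the suspiciousness ranking.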

85

profiler

a performance analysis tool that measures the frequency and duration of function calls as a program runs

86

flat profile

computes the average call times for functions but does not break times down based on context

87

call-graph profile

computes call times for functions and also the call-chains involved

88

event-based profiling

register a function that will get called whenever the target program calls a method, loads a class, allocates an object, etc. (hooks)

89

sampling analysis pros/cons?

pros: simple/cheap, no big slowdown; cons: can miss periodic behavior, high error rate

90

rice's theorem

every static analysis is necessarily incomplete, unsound, undecidable, or a combination thereof

91

what defects is static analysis good for?

security, memory safety, resource leaks

92

what are some limitations of pattern-based static analysis?

- analysis must produce zero false positives - analysis needs to be really fast - you can't just turn on one particular check

93

linters

shallow syntax analysis for enforcing code styles and formatting; good for improving maintainability

94

pattern-based bug detectors

simple syntax or api-based rules for identifying common programming mistakes

95

type-annotation validators

check conformance to user-defined types

96

data-flow analysis/abstract interpretation

deep program analysis to find complex error conditions; the issue is that such analyses are costly

97

what are some limitations of type based static analysis?

- can only analyze code that is annotated - only considers the signature and annotations of methods - can't handle dynamically generated code well - can produce false positives

98

technical debt

workarounds or sub-optimal implementations that incur long-term costs (performance, maintenance, etc)

99

why is technical debt not synonymous with bad internal quality?

a system can have messy internals and low debt (stable legacy code nobody touches), or clean internals and high debt (elegant code built on a fundamentally wrong abstraction that must soon change); technical debt mainly impacts maintainability and evolvability

100

true or false: all systems have technical debt

true

101

spaghetti code

code with tangled control flow and poorly organized logic so it is hard to follow, debug, or modify

102

glue code

ad hoc code written mainly to connect different modules, libraries, or services without clean abstractions

103

god class/god object

a class that does too many things and holds too many responsibilities

104

how is ml different from se?

ml is more data-focused, experimental, and algorithmic, while se is more structured and process-oriented; evaluation: se -> functional correctness, ml -> accuracy

105

how can ml be useful in se?

automation and reducing manual effort, support in problem-solving and decision making; examples: writing tests, refactoring code, understanding code

106

what are some limitations of ml?

incorrect/non-optimal code, security, overreliance, high latency

107

what additional complexities/concerns does ml add to a system?

ml-enabled software is often under-specified, data-driven (data quality/quantity), uncertain, and opaque

108

what are some strategies to assure safety with ml models?

human in the loop, undoable actions, guardrails, mistake detection/recovery, containment and isolation

109

fault tree analysis

top-down systematic method used to identify and analyze potential causes of system failures (fault tree diagram); used to understand how component failures can lead to system-wide failures

110

probably approximately correct (PAC)

a framework for analyzing ml algorithms: the learned hypothesis is probably correct (holds with high probability), approximately correct (error below a specified threshold), and correct in that it correctly classifies new samples

111

what are the three questions to promote human flourishing?

1. Does my software respect the humanity of the users? 2. Does my software amplify positive behavior, or negative behavior for users and society at large? 3. Will my software's quality impact the humanity of others?

112

what are the advantages of open source?

transparency, crowd-source bug reports/fixes, vulnerabilities found faster, community/adoption

113

what is dependency pinning vs floating?

depending on a specific version vs pulling the latest available version with each build
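In pip requirements syntax, for example, the two styles look like this (package names and version numbers are illustrative):

```
# pinned: every build resolves to exactly this version (reproducible)
requests==2.31.0

# floating: each build pulls the latest version satisfying the constraint
flask>=2.0
```

Pinning trades automatic upgrades (including security fixes) for reproducibility; floating does the reverse.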

114

transitive dependencies

a package's dependencies can themselves depend on other packages

115

diamond dependencies

multiple intermediate dependencies have the same transitive dependency

116

what are some resolutions to the diamond problem?

duplicate the packages; use a global list with one version for each; use the newest or oldest version