This is an effort by The Boss Academy to provide high-quality study materials & model question
papers for all competitive Nursing exams. Please use this material & share it
with others to strengthen the Nursing profession. We welcome your suggestions
to help us improve our services & spread knowledge,
skills & power.
Quantitative
Research methods
1. What are the major classes of
quantitative design?
1. Experimental (and Quasi-experimental)
2. Non-experimental
2. What are the 3 criteria of causality?
1. Preceded the effect in time
2. Association between the cause and effect
3. Relationship cannot be due to the influence of a
third variable or confounder
3. What are the 3 aspects of experimental
design?
1. Manipulation
2. Control/comparison
3. Randomization
4. What are the different experimental
designs?
1. Randomized controlled trial (POSTTEST ONLY)
2. Randomized controlled trial (PRETEST-POSTTEST)
3. Cross-Over Design
5. What is a cross-over experimental
design?
Sample:
1. Treatment 1 ➡️ washout ➡️ Treatment 2
2. Treatment 2 ➡️ washout ➡️ Treatment 1
6. What is a pretest-posttest randomized
controlled trial experimental design?
Measures outcomes before and after experimental and
control interventions
7. What are limitations of experimental
designs?
1. Not everything can be manipulated
2. Hawthorne Effect
3. Blinding not always possible
4. May be unethical to withhold care
8. What are quasi-experimental designs?
Involves a manipulation but lacks either randomization
or control group
9. What are the 2 categories of
quasi-experimental designs?
1. Non-equivalent control group design
(Intervention group compared to nonrandomized control
group)
2. Within-subjects designs
(One group studied before and after intervention)
10. What are strengths of
quasi-experimental designs?
More feasible as compared to a true experiment
11. What are limitations of
quasi-experimental designs?
1. More difficult to infer causality
2. Rival explanations for results
12. What are categories of
non-experimental designs?
1. Correlational cause-probing research
2. Descriptive correlational designs
3. Univariate descriptive studies
4. Cohort studies
5. Case-control study
13. What is a cohort study?
Investigator identifies exposed and unexposed groups
(cohorts) and follows them forward in time.
- Useful for studying rare exposures (rare outcomes favour case-control designs)
***Exposed and unexposed may begin with different
risks of target outcomes (CONFOUNDING)
14. What are strengths of non-experimental
designs?
1. Efficient way to collect large amounts of data when
intervention/randomization not possible
2. Does not require artificial provision of exposure
3. Treatments not withheld
15. What are limitations of
non-experimental designs?
1. Rival explanations for results
2. Limited ability to infer causality
16. Cross-sectional
Data collection at one time point, or more than one in
close succession
17. Longitudinal
1. Data collected at multiple time points over
days/months/years
2. Better at showing patterns of change and at
clarifying whether a cause occurred before an effect
3. ATTRITION = loss of participants over time
18. Which is NOT another term for
randomization?
A. Random sampling
B. Random allocation
C. Random assignment
D. None of the above
A. Random sampling
19. What are the 4 main aspects of
Validity?
1. Statistical Conclusion Validity
2. Internal Validity
3. Construct Validity
4. External Validity
20. Validity
The degree to which inferences made in a study are
accurate and well-founded
21. What are ways of controlling
extraneous/confounding variables?
1. Constancy of conditions
2. Formal protocol to enhance intervention fidelity
3. Randomization
4. Homogeneity (restricting sample)
5. Matching
6. Statistical control (ex. Analysis of covariance)
22. Statistical conclusion validity
The ability to detect true relationships statistically
23. What are threats to statistical
conclusion validity?
1. Low statistical power (ex. low sample size)
2. Weakly defined “cause” - independent variable
poorly constructed
3. Low implementation fidelity
4. Poor intervention adherence
24. Internal validity
Extent to which it can be inferred that the
independent variable caused or influenced the dependent variable
25. What are threats to internal validity?
1. Temporal ambiguity
2. Selection threat - bias arising from preexisting
differences between groups
3. History - other events co-occurring with the causal
factor
4. Maturation - processes that result from passage of
time
5. Mortality/attrition - differential loss from groups
26. Construct validity
Key constructs are adequately captured in the study,
and thus the evidence in the study supports inferences about the constructs that
they are intended to represent.
27. What are threats to construct
validity?
1. Poor construct validity of measurement tools
2. Reactivity to the study situation (Hawthorne
Effect)
3. Researcher expectancies
4. Novelty effect
5. Compensatory effects
6. Treatment diffusion or contamination
28. External validity
The extent to which it can be inferred that the
relationships observed in a study hold true in other samples or settings (ie.
generalizability)
29. What are threats to external validity?
1. Sample that is non-representative of the population
2. Intervention that is difficult to replicate
3. Artificiality of research environment
30. Open-ended questions
1. Allows for more in-depth data
2. Analysis can be difficult and time-consuming
31. Closed-ended questions
1. Greater privacy
2. Less likely to go unanswered
32. Composite psychosocial scales
Used to make fine quantitative discriminations among
people with different attitudes, perceptions, or needs
33. Likert Scales
1. Consist of several declarative statements (items)
expressing viewpoints
2. Responses are on an agree/disagree continuum
(usually 5 to 7 points)
3. Responses to items are summed to compute a total
scale score (SUMMATED RATING SCALE)
34. Observation in quantitative studies
1. Structured observations of pre-specified units (ex.
Behaviours, actions, events)
2. Structured in what to observe, how long, and how to
record
Methods:
1. Category systems
2. Checklists
35. Bromage Score
Observational rating on a descriptive continuum to
test degree of motor block after epidural
36. What are disadvantages of
observations?
1. REACTIVITY = behaviours may be altered by awareness
of being observed
2. Observer bias
3. Resources required to gain entry into setting
37. In vivo measurements
Performed directly within or on living organisms (ex.
Blood pressure)
38. In vitro measurements
Performed outside the organism’s body (ex. Urinalysis)
39. What are advantages of biophysical
measures?
1. Precision
2. Objective
3. Validity
40. What are disadvantages of biophysical
measures?
1. Resources required
2. Many factors may affect variability
3. Ethical responsibilities
41. What are the 2 types of biophysical
measures?
1. In vivo
2. In vitro
42. Errors of Measurement
Obtained score = True score +/- Error
= Signal +/- Noise
43. Reliability
The extent to which scores are free from measurement
errors.
Reliability is consistency of measure, validity is
accuracy of measure.
44. Reliability coefficients
0.00 to 1.00
Unsatisfactory < 0.70
Desirable >= 0.80
45. What are the 3 methods of assessing
reliability?
1. Test-Retest Reliability
2. Internal Consistency
3. Interrater Reliability
46. Test-Retest Reliability
Administration of the same measure to the same people
on two occasions. Check correlation of test scores with retest scores.
47. Internal Consistency
(Cronbach’s α)
Consistency across items in a composite scale.
Instrument is administered on one occasion and the relationship between items
tested.
48. Interrater Reliability
Similarity between measurements of multiple
observers/raters using the same instrument.
49. Instrument Validity
The degree to which an instrument measures what it is
supposed to measure
50. What are the 4 aspects of instrument
validity?
1. Face validity
2. Content validity
3. Criterion-related validity
4. Construct validity
51. Face validity
Refers to whether the instrument looks as though it is
measuring the appropriate construct. Based on judgment, no objective criteria
for assessment.
52. Content Validity
The degree to which an instrument has an appropriate
sample of items for the construct being measured.
53. Content Validity Index (CVI)
- Expert evaluation
- Quantitative measure
- Proportion of items that are highly relevant on a
numerical scale
Desired >= 0.8
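As a minimal sketch of the CVI calculation, the following Python snippet computes an item-level CVI from hypothetical expert ratings on the common 4-point relevance scale (a rating of 3 or 4 counts as "highly relevant"); the function name is ours, not from any standard library.

```python
# Item-level Content Validity Index (I-CVI): proportion of experts who
# rate an item as relevant (3 or 4 on a hypothetical 4-point scale).
def item_cvi(ratings):
    return sum(1 for r in ratings if r >= 3) / len(ratings)

expert_ratings = [4, 3, 4, 2, 4]  # five hypothetical expert ratings for one item
print(item_cvi(expert_ratings))   # 0.8, which meets the desired >= 0.8 threshold
```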
54. Criterion-Related Validity
The degree to which scores on an instrument are a good
reflection of a “gold standard” criterion for the same construct.
55. How is Criterion-related Validity
evaluated?
1. Concurrent Validity = correlated with external
criterion, measured at the same time
2. Predictive Validity = correlates with external
criterion, measured at a future point in time
56. Construct validity
The degree to which the evidence captures the construct of interest.
Especially useful when there is no “gold standard” comparison.
57. How is construct validity evaluated?
1. Known-groups (Discrimination) Validity
2. Convergent Validity = test correlates with other
measures of the same construct
3. Divergent Validity = test DOES NOT correlate with
measures of other/different constructs
58. How can measurements have both
Reliability and Validity?
Reliability is necessary but not sufficient for
validity.
With both, the Obtained Score approximates the True Score
59. Statistical Significance
The results from the sample data are unlikely to have
been caused by chance.
60. Clinical Significance
The results have practical importance
61. Confidence Intervals (CI)
Range of values within which a population parameter is
expected to lie.
The narrower the CI, the more precise the estimate of
effect.
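A rough sketch of how a 95% CI for a mean is computed (mean ± 1.96 standard errors, using the normal approximation; the blood-pressure data are hypothetical):

```python
import math
import statistics

def ci95_mean(data):
    """Approximate 95% CI for a mean: mean +/- 1.96 standard errors."""
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(len(data))  # standard error
    return (m - 1.96 * se, m + 1.96 * se)

# hypothetical systolic blood pressures (mmHg)
bp = [118, 122, 125, 130, 121, 128, 124, 126]
low, high = ci95_mean(bp)
print(round(low, 1), round(high, 1))  # a narrower interval = a more precise estimate
```

Larger samples shrink the standard error, which is why bigger studies produce narrower (more precise) intervals.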
62. Relative Risk vs Absolute Risk
Absolute = likelihood event will occur under specific
conditions
Relative = likelihood event will occur in a group
compared to another group with different behaviours, environments, etc
Clinical significance of a given Relative Risk cannot
be made unless you know the Absolute Risk.
63. Number Needed to Treat (NNT)
Number of people that would need to receive the
intervention to prevent one additional bad outcome.
Lower baseline absolute risk ➡️ higher NNT
Higher baseline absolute risk ➡️ lower NNT
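A quick arithmetic sketch of NNT = 1 / absolute risk reduction, using hypothetical risks, illustrates the baseline-risk relationship stated above:

```python
def nnt(risk_control, risk_treatment):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = risk_control - risk_treatment  # absolute risk reduction
    return 1 / arr

# Same relative risk reduction (25%), different baseline absolute risks:
print(round(nnt(0.40, 0.30)))  # higher baseline risk -> NNT = 10
print(round(nnt(0.04, 0.03)))  # lower baseline risk  -> NNT = 100
```

This is also why a relative risk alone cannot establish clinical significance: the same relative effect yields very different NNTs depending on the absolute baseline risk.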
64. What are the 5 types of reviews?
1. Systematic Review
2. Meta-Analysis
3. Meta-Synthesis
4. Scoping Review
5. Narrative Review
65. What are the 6 steps in conducting a
Review?
1. Research Question
2. Sampling of Primary Studies
3. Quality Appraisal of Primary Studies
4. Data extraction
5. Data analysis
6. Evidence Synthesis
66. Systematic Review
Rigorous synthesis of research findings in a
particular research question, using systematic sampling and data collection
procedures and a formal protocol.
Goals:
1. Reduction of bias and random error
2. Transparency
3. Reproducibility and Verifiability
67. Literature search in a systematic
review
1. Detailed and Exhaustive
2. Search strategy should reflect the focused clinical
question
3. High yield expected
68. Data extraction in a systematic review
1. Collect relevant study information
2. Preferably performed in duplicate
69. Analysis in a systematic review
1. Qualitative synthesis (Narrative description) =
provides overview of the available studies, common results, discrepancies, etc
2. Quantitative synthesis (Meta-analysis) = only
possible if studies are similar enough to be combined statistically
70. Meta-Analysis
1. Statistical process for combining results of
multiple studies
2. Attempts to overcome the problem of reduced
statistical power in small sample sizes by combining results
3. Results often depicted in a Forest Plot (or
Blobbogram)
71. Metasynthesis
Integration and/or comparison of findings from
multiple qualitative studies.
Purpose = generate new knowledge
72. Scoping Review
Determines the general state of knowledge related to a
specific question and locates gaps in the literature.
Broader in scope than a systematic review, but follows an
established methodology
73. Narrative Review
1. Depends on authors’ bias
2. Author picks criteria
3. Searches any databases; less structured and
comprehensive
4. Methods not usually specified
5. Only narrative summary
6. Can’t replicate review
74. Systematic Review
1. Scientific approach to a review article
2. Criteria determined at outset (a priori)
3. Comprehensive systematic search for relevant
articles; use of systematic strategy
4. Explicit methods of appraisal and synthesis
5. May include meta-analysis
6. Reproducible
75. Clinical practice guidelines
development process
1. Establish guideline development group
2. Develop practice recommendations
3. External review
4. Monitoring and Updating
76. What are the 6 Levels of Evidence?
Ia. Meta-analysis or Systematic reviews of randomized
controlled trials
Ib. At least one randomized controlled trial
IIa. At least one well-designed controlled study
without randomization
IIb. At least one other type of well-designed
quasi-experimental study without randomization
III. Well-designed non-experimental descriptive
studies, such as comparative or correlation
IV. Expert committee reports or opinions
77. What are types of clinical practice
guideline developers?
1. Government agencies (ie. Ontario)
2. Professional Associations (ie. RNAO)
3. Disease or Population-Specific Organizations (ie.
Heart&Stroke)
4. International Organizations (ie. WHO)
5. Other Organizations (ie. CAMH)
78. What are challenges with Knowledge
Translation (KT)?
1. A plethora of terms is used to describe KT
2. KT engages researchers and knowledge users with
different perspectives
3. KT involves a complex set of interactions
4. Successful KT involves practice change to change
outcomes
79. Ethical imperative for Knowledge
Translation (KT)
1. (1/3) of patients DO NOT get treatments of proven
effectiveness
2. (3/4) of patients do not have info they need for
decision making
3. (1/4) of patients get care that is NOT NEEDED or
Potentially harmful
4. (1/2) of physicians DO NOT have evidence they need
for decision making
80. Diffusion of Innovations
(Rogers, 1962, 2003)
An influential theory generally assumed to represent
or form the theoretical foundation of research utilization or KT. Explains the
process and spread of an innovation within a social system.
81. What are the 4 main factors that
influence diffusion?
1. Innovation
2. Communication (method to inform others)
3. Time (from first knowledge to acceptance/rejection)
4. Social system
82. What are the 5 steps in the process of
innovation adoption?
1. Knowledge
2. Persuasion
3. Decision
4. Implementation
5. Confirmation
83. PEST Analysis
Societal Characteristics
1. Political
2. Economic
3. Sociocultural
4. Technological
84. KT (implementation) Strategies
1. Reminders
2. Educational (written) Materials
3. Educational Outreach
4. Audit & Feedback
85. What is a Tailored Intervention?
Interventions planned after investigating factors that
explain current practice and reasons for resistance to change.
- Barriers identified through observation, focus
groups, interview, surveys
86. A theoretical integration of
qualitative findings is known as a:
Meta-synthesis
87. Can purposive sampling be used in
quantitative research?
Yes
88. What is a Type I statistical error?
The rejection of a null hypothesis when it should not
be rejected
(False positive)
Risk is controlled by the level of significance
(alpha), typically set at 0.05 or 0.01
89. Type II Statistical Error
Failure to reject the null hypothesis when it should
be rejected.
False negative.
Controlled by ensuring adequate power.
90. Can a low response rate threaten the
external validity of quantitative research study?
Yes
91. What are the 4 levels of Measurement?
1. Nominal = data classified into categories (ie.
gender, diagnosis)
2. Ordinal = ranked categories (ie. disease stage)
3. Interval = meaningful difference between values
(ie. temperature, shoe size)
4. Ratio = meaningful difference between values with an
absolute zero (ie. height, fatigue score)
92. Descriptive Statistics
Used to present, organize, and summarize the data from
the sample.
1. Univariate statistics
2. Bivariate statistics
93. Inferential Statistics
Used to make inferences about the population from the
sample
94. Univariate Descriptive Statistics
Measures of Central Tendency:
1. Mean
2. Median
3. Mode
Measures of Dispersion:
1. Range
2. Standard Deviation (SD)
3. Variance
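All of the measures listed above are available in Python's standard statistics module; a small sketch with hypothetical pain scores:

```python
import statistics

scores = [3, 5, 5, 6, 7, 5, 8, 4]  # hypothetical pain scores (0-10)

# Central tendency
print(statistics.mean(scores))    # 5.375
print(statistics.median(scores))  # 5.0
print(statistics.mode(scores))    # 5

# Dispersion
print(max(scores) - min(scores))    # range = 5
print(statistics.variance(scores))  # sample variance
print(statistics.stdev(scores))     # sample standard deviation (sqrt of variance)
```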
95. Positively skewed distribution
Peak to the left, long tail to the right (toward higher values): /\__
96. Negatively skewed distribution
Long tail to the left (toward lower values), peak to the right: ___/\
97. Contingency Table - Bivariate
descriptive statistics
A two-dimensional frequency distribution; frequencies of
two variables are cross-tabulated.
Cells at intersections of rows and columns display
counts and percentages.
Used for nominal or ordinal variables
98. Correlation = Bivariate descriptive
statistics
Indicates direction and magnitude of relationship
between two variables.
Interval-ratio measures.
Correlation coefficient = Pearson’s r
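Pearson's r can be computed by hand as the covariance divided by the product of the variables' deviations; a sketch with hypothetical sleep and fatigue data (the function name is ours):

```python
import math

def pearson_r(x, y):
    """Pearson's r: direction (sign) and magnitude of a linear relationship."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sleep = [4, 5, 6, 7, 8, 9]    # hypothetical hours of sleep
fatigue = [9, 8, 7, 5, 4, 2]  # hypothetical fatigue scores
print(round(pearson_r(sleep, fatigue), 2))  # -0.99: strong negative correlation
```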
99. Null Hypothesis
Statistical tests to either reject or fail to reject
the null hypothesis.
H0: There is no difference/relationship between the IV
and DV.
H1: There is a difference.
100. P<0.05
There is less than a 5% chance of observing a difference
this large if the null hypothesis were true.
REJECT the null hypothesis
101. P>0.05
There is a greater than 5% chance of observing a difference
this large if the null hypothesis were true.
FAIL TO REJECT the null hypothesis.
102. Statistical Power
A measure of the probability that the statistical test
will detect significant difference/effect IF ONE EXISTS.
103. How is statistical power determined?
1. Effect size (size of treatment/relationship effect)
2. Alpha level
3. Sample size
Smaller effect size = lower power
Smaller alpha = lower power
Power analysis should be done in advance to determine
sample size.
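The interplay of effect size, alpha, and power can be sketched with the standard normal-approximation formula for comparing two means, n per group = 2·((z_alpha/2 + z_beta)/d)². The z-values below assume a two-sided alpha of 0.05 and 80% power; this is an approximation, not a substitute for a proper power analysis.

```python
import math

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.8416  # power = 0.80

def n_per_group(effect_size):
    """Approximate sample size per group for comparing two means."""
    return math.ceil(2 * ((Z_ALPHA + Z_BETA) / effect_size) ** 2)

# Smaller effect sizes demand larger samples for the same power:
print(n_per_group(0.8))  # large effect (Cohen's d)
print(n_per_group(0.5))  # medium effect
print(n_per_group(0.2))  # small effect
```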
104. Parametric Statistics
For interval-ratio data that are normally distributed.
Ex:
1. T-tests
2. ANOVA
3. Correlation
4. Regression
105. Non-Parametric Statistics
For nominal-ordinal data or non-normally distributed
interval-ratio data.
Examples:
1. Chi-squared (χ²) Test
2. Mann Whitney U test
106. What is a 95% Confidence Interval?
Confident that the true population parameter lies within
this range 95 times out of 100.
107. How do you determine differences in
continuous outcomes?
A confidence interval that includes 0 (ie. −0.5 to 4.8)
signifies that it is plausible that there is no difference.
108. How do you determine differences in
odds ratios and relative risk?
A confidence interval that includes 1 (ie. 0.1 to 3.2)
signifies that it is plausible that there is no difference.
109. Cronbach’s alpha
The mean of all possible split-half correlations.
Measures internal consistency.
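Cronbach's alpha can also be computed from the item variances and the variance of the total scores: α = k/(k−1) · (1 − Σ item variances / total variance). A minimal sketch with hypothetical Likert items (respondents in the same order for each item):

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, respondents in the same order."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - sum_item_vars / statistics.variance(totals))

# hypothetical 3-item scale answered by 5 respondents
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 2, 4, 1]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # 0.92: desirable (>= 0.80)
```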
110. Convergent validity
A form of construct validity in which the test
correlates with other measures of the same construct
111. Relative Risk
Ratio of probabilities comparing risk of event among
those exposed and not exposed.
Example:
(Risk of event in Treatment Group)/(Risk of event in
control group)
112. Odds ratio
Compares the presence versus absence of an exposure,
given an already known outcome.
Example:
(Odds of event in Treatment group)/(Odds of event in
control group)
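The two examples above can be worked through with a hypothetical 2×2 table; note that the odds ratio approximates the relative risk only when the event is rare:

```python
# Hypothetical 2x2 table:
#                 event   no event
# treatment       a = 10    b = 90
# control         c = 20    d = 80
a, b, c, d = 10, 90, 20, 80

risk_treatment = a / (a + b)  # 0.10
risk_control = c / (c + d)    # 0.20
relative_risk = risk_treatment / risk_control

odds_treatment = a / b  # odds of event in treatment group
odds_control = c / d    # odds of event in control group
odds_ratio = odds_treatment / odds_control

print(relative_risk)         # 0.5
print(round(odds_ratio, 2))  # 0.44
```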
113. Triangulation in Quantitative
Research
Using multiple measures of an outcome variable to see
if predicted effects are consistent.
Ex: Sleep diary + Actigraphy + Polysomnography
Qualitative
research methods
1. What does nursing research generally
focus on?
1. Recipients of care
2. Providers of care
3. Health system
2. What does Evidence Based Practice (EBP)
integrate?
1. Best research practice
2. Clinical expertise
3. Patient characteristics/preferences
4. Healthcare resources
5. Clinical stage, setting, and circumstances
3. What are the limitations of EBP?
1. Some forms of knowledge are marginalized
2. Ignores clinical judgement and context
3. Depends on availability of evidence
4. Application of evidence to individuals is
challenging
4. What are the 7 steps in the
Evidence-Based Practice (EBP) process?
1. Admit to uncertainty, or that different approaches may be
possible
2. Formulate clinical questions
3. Search relevant evidence
4. Critically appraise the evidence
5. Integrate the evidence with clinical expertise +
patient preferences + context
6. Assess the effectiveness of the intervention
7. Disseminate results
5. What are the two key paradigms in
nursing research?
1. Positivist/Post-Positivist Paradigm
2. Constructivist Paradigm
6. Positivist Paradigm
QUANTITATIVE = internal validity
1. Researcher holds personal beliefs and biases in check
2. Assumes findings are not influenced by the
researcher
3. Deductive processes
4. Disciplined procedures to test ideas
5. Emphasis on measured, quantitative info
Goal: seeks generalizations
7. Constructivist Paradigm
QUALITATIVE = TRUSTWORTHINESS
1. Reality is not fixed - constructed by researcher +
participant
2. Reality exists within a context; many constructions
are possible
3. Emphasis on narrative information
RELATIVISM: no process by which the ultimate truth or
falsity of the constructions can be determined
SUBJECTIVITY: interpretations of participants are key
to understanding the phenomenon of interest
8. What are the phases of Quantitative
research?
1. Conceptual Phase
2. Design and Planning Phase
3. Empirical Phase
4. Analytic Phase
5. Dissemination Phase
9. What are the major classes of
Qualitative research?
1. Grounded theory
2. Phenomenology
3. Ethnography
4. Generic qualitative approaches
5. Others (eg. narrative inquiry, case study)
10. Generalizability
Quantitative research
The extent to which study findings are valid for those
not in the study
11. Transferability
Qualitative research
The extent to which qualitative findings can be
transferred to other settings
12. What are the 3 steps of formulating a
focused clinical question to search for evidence?
1. Start with an initial question
2. Dissect the question into its component parts
(PICO)
3. Formulate the focused (PICO) questions
13. PICO(T)
1. Population
2. Intervention/Exposure
3. Comparison
4. Outcome
5. Time
14. What are the levels of evidence?
1. Systematic Reviews
2. Single Randomized Controlled Trial
3. Single Non-Randomized Trial
4. Single Prospective/Cohort Study
5. Single Case-Control Study
6. Single Cross-Sectional Study (survey)
7. Single in-depth qualitative study
8. Expert Opinion, Case Reports, etc
15. Barriers to EBP
1. Limited knowledge and skills
2. Lack of mentors
3. Inadequate resources
4. Insufficient time to engage in the process
5. Lack of inclusion/involvement in decision-making
16. Facilitators of EBP
1. Appropriate knowledge and skills
2. Organizational culture that supports
evidence-informed practice and nurses’ participation in it
3. Clinical practice Guidelines and pre-processed
evidence
4. Mentorship
17. Conceptual Model
Deals with abstractions, assembled in a coherent
scheme.
Represents a more loosely structured attempt to
explain phenomena than theories do
18. Schematic Model
Visually represents relationships among phenomena and
is used in both quantitative and qualitative research.
19. Health Belief Model
Health-seeking behaviour is influenced by a person’s
perception of the THREAT posed by a health problem and the VALUE associated
with the actions aimed to reduce the threat.
1. Perceived susceptibility
2. Perceived severity
3. Perceived benefits
4. Perceived barriers
5. Cues to action
6. Self-efficacy
20. Health Belief Model: Perceived
severity
Implications if one got this illness (medical
consequences, social consequences, etc)
21. What are limitations of the Health
Belief Model?
1. Doesn’t account for a person’s attitudes/beliefs
2. Doesn’t take account of habitual behaviours
3. Doesn’t account for behaviours that are non-health
related
4. Doesn’t account for economic or environmental
factors
5. Assumes equal access to information on illness or
disease
6. Assumes “health” actions are the main goal
22. What are the 4 different theories in
Qualitative Research?
1. Substantive Theory
2. Grounded Theory
3. Ethnography
4. Phenomenology
23. Grounded Theory
1. Humans act toward things based on the meanings that
the things have for them.
2. The meaning of things is derived from human
interaction.
3. Meanings are handled in, and modified through, an
interpretive process.
24. What is the aim of theories and
conceptual models?
Aim to describe phenomena and the relationships
among them
25. What is a framework?
Provides overall conceptual underpinnings of a study.
Can be based in theory or a conceptual model.
26. What do high quality studies
demonstrate?
A fit between the framework and the study design and
methods.
27. What are characteristics of
Qualitative Research Design?
1. Emic perspective
2. Triangulating various data collection strategies
3. Holistic
4. Immersion of researchers in setting
5. Requires reflexivity
6. Emergent = data generation and analysis proceed
together
28. Reflexivity
What we know is based on our subject positions. A
critical self-reflection about one’s own biases, preferences, and
preconceptions.
29. What are the major Qualitative
Research Traditions (designs)?
1. Phenomenology
2. Ethnography
3. Grounded Theory
30. Ethnography
Describes and interprets “culture”. Seeks an emic
perspective and to reveal tacit knowledge. Assumes culture guides the way people
structure their experiences.
DATA SOURCES = wide-ranging: observations, interviews,
focus groups, etc
PRODUCT = in-depth portrait of culture
31. Phenomenology
Focuses on description and interpretation of people’s
lived experience
ASKS: what is the essence of a phenomenon and what
does it mean?
DATA SOURCE: in-depth conversations/interviews
MAIN TYPES: Descriptive and Interpretive
32. Descriptive Phenomenology
Describes human experience (Husserl)
BRACKETING = identifying and “parking” preconceived
views. Acknowledging and removing biases.
33. Interpretive (Hermeneutic)
Phenomenology
Interprets and understands human experience (not just
a description) (Heidegger)
1. Bracketing does NOT occur
- biases are part of the interpretation
- biases are used and embraced
2. Supplementary data sources:
- texts, artistic expression
34. Etic perspective
Outsider’s view (that of the researcher)
35. Emic perspective
Insider’s view
36. Grounded Theory
Purpose: to generate theory that explains a pattern of
behaviour of a defined group of people
Elucidates social processes and social structures.
CONSTANT COMPARISON used for theoretical refinement
37. Descriptive Qualitative Studies
Tend to be eclectic in their designs and methods.
Analysis may include content analysis or thematic analysis of narrative data
with intent of understanding important themes and patterns.
38. What is an approach to the study of
social processes and social structures?
Grounded Theory
39. Which process is associated with
descriptive phenomenology?
Bracketing
40. What are the 4 methods of sampling in
Qualitative Research?
1. Convenience (volunteer) sampling
2. Snowball (nominated)
3. Purposive (purposeful)
4. Theoretical Sampling
41. Convenience (volunteer) Sampling
Uses the most conveniently available people as
participants
42. Snowball (nominated) sampling
Early sample members are asked to refer others who
meet the eligibility criteria
43. Purposive sampling
Researchers hand pick the cases that will best
contribute to the study.
Can be classified into various types:
1. Maximum variation
2. Homogenous
3. Extreme/deviant
4. Typical
5. Criterion
6. Confirming/Disconfirming
44. Theoretical Sampling
Involves selecting cases/groups who can provide data
that helps develop an emerging theory
45. How is Qualitative research sample
size determined?
1. Purpose
2. Design
3. Sampling strategy
4. Data quality
5. Other (time, access, etc)
Decisions to stop are guided by DATA SATURATION
46. Sampling by phenomenology
1. Relies on very small samples
2. Participants must have experienced the phenomenon
of interest and be able to articulate that experience
3. May also sample artistic or literary sources
(interpretive phenomenology)
47. Sampling by ethnography design
1. Mingling with many members of the culture (“big net”
approach)
2. Informal conversations with 25-50 informants
3. Multiple interviews with smaller number of key
informants
4. Involves selecting types of artefacts and foci of observation
48. Sampling by Grounded Theory Design
1. Typically 20-30 participants
2. Select participants who can best contribute to
emerging theory
Usually begins with PURPOSIVE sampling then adjusted
through theoretical sampling.
49. In which type of study would data
saturation NOT be used?
Survey
50. Generating data in Qualitative
Research
1. Qualitative self-reports
(one-on-one interviews, dyads, focus groups)
2. Unstructured observations
3. Artifacts (objects, documents)
51. Field issues in Qualitative Research
1. Gaining trust
2. Pace of data collection
3. Emotional involvement with participants
4. Reflexivity
52. Common qualitative self-report
techniques
1. Unstructured interviews
2. Semi-structured interviews
3. Focus group interviews
4. Others (diaries, photo elicitation, think-aloud
methods)
53. Methods of recording unstructured
observations
1. Logs (field diaries)
2. Field notes
2.a) Descriptive (observational) notes
2.b) Reflective notes:
- Methodologic notes
- Theoretical (analytical) notes
- Personal notes
54. Trustworthiness
The degree of confidence qualitative researchers have
in their data and analysis
Qualitative studies are trustworthy when they
accurately represent the experience/phenomenon under study
55. Lincoln and Guba’s Criteria for
Trustworthiness
1. Credibility
2. Dependability
3. Confirmability
4. Transferability
56. Credibility
Confidence in the truth of data and interpretations
57. Dependability
Stability of data over time and conditions
58. Confirmability
Objectivity of the data; findings reflect the
participants’ voices and conditions of the inquiry, not solely the researcher’s
biases or perspectives
59. Transferability
The extent to which findings have meaning to others in
similar settings
60. Strategies to enhance quality of
Qualitative Inquiry during Data Collection
1. Prolonged engagement
2. Persistent observation: intensive focus on the salience
of the data being gathered
3. Reflexivity strategies
4. Comprehensive and vivid recording
5. Audit trail
6. Member checking
61. Audit trail
A systematic collection of documentation and
materials, and a decision trail that specifies decision rules
62. Member checking
Providing feedback to participants about emerging
interpretations; obtaining their reactions
63. Data triangulation
The use of multiple data sources to validate
conclusions
(time, space, and person triangulation)
64. Investigator Triangulation
The use of two or more researchers to make data
collection, coding, and analysis decisions
65. Method Triangulation
The use of multiple methods of data collection to
study the same phenomenon (eg, self-report + observation)
66. Theory Triangulation
The use of multiple theoretical positions to explore
the same phenomenon
67. Negative case analysis
Specific search for cases that appear to discredit
earlier hypotheses
Improves quality of qualitative inquiry
68. Inquiry audit
A formal scrutiny of the data and relevant supporting
documents and decisions by an external reviewer
Improves quality of qualitative inquiry
69. Things to consider in interpreting
research findings
1. Credibility
2. Meaning
3. Importance
4. Transferability
5. Implications
70. Which type of self-report involves
the use of a discussion moderator?
Focus-group interviews
71. Emergent
Qualitative characteristic = data generation and
analysis proceed together
72. What are techniques for establishing
credibility?
1. Prolonged engagement
2. Persistent observations
3. Triangulation
4. Peer debriefing
5. Negative case analysis
6. Referential adequacy
7. Member Checking
73. What technique establishes
transferability?
Thick description
74. What technique establishes
dependability?
Inquiry audit
75. What techniques establish
confirmability?
1. Confirmability audit
2. Audit trail
3. Triangulation
4. Reflexivity
76. Operational Definition in Quantitative
Research
A precise statement of how a conceptual variable is
turned into a measurable variable
77. What are the 2 classes of quantitative
research?
1. Experimental
2. Non-Experimental
78. What is a paradigm?
A general perspective on the real world
Thanks
Visit our sites for more updates:
www.thebossacadmy.net for study materials, model and previous-year question papers, books
& journals
www.medjobss.com for all
medical-related professional Government jobs, notifications, applications
& online apply links.