The most appropriate research designs for evaluating the efficiency of the training campaign are the experimental design and the case study. An experimental design is a study in which data are collected to confirm a hypothesized change produced by the introduction of independent variables over a period of time (Whitley & Kite 2012). Such a design can be used to evaluate any number of variables as long as they can be included in the data collection tool. Importantly, it allows the researcher to observe and measure both the occurrence and the magnitude of changes. In the case of ABCD, LLC, it can be applied to measure the contribution of training to employee job performance, employee satisfaction, employee retention, and the overall efficiency of the organization. Depending on when data collection begins and whether baseline data are gathered, it may also be possible to rule out other factors responsible for changes in employee performance and satisfaction. Finally, the experimental design can provide information on the duration of the produced outcomes, as it can be administered after the training campaign has ended. In this way, the long-term effects of training can be evaluated and the need for a new program determined.
The case study is a method that involves a close, in-depth examination of a single phenomenon to produce a detailed account of the effects in question (Crossman 2017). While it is currently unclear whether a case study will be required for the evaluation of ABCD services, it may become useful if a significant anomaly is detected in the collected data or if a certain aspect of the training produces results inconsistent with those reported in the literature. Simply put, the experimental study will produce an overview of the outcomes, whereas the case study will provide the necessary details.
Several research designs were considered incompatible with the research at hand. Specifically, a cross-sectional design, which is commonly used for measuring the effects of interventions on organizational efficiency, was dismissed because it provides only a snapshot at a given time. While such a design requires less time to complete, it does not illustrate the sequence of events. Thus, such an evaluation would outline current job performance and employee satisfaction without establishing their relationship to the training provided by the company. The comparative design would be unsuitable since it requires more than one sample to draw conclusions. Because the current research focuses on the staff of a single organization, it would not be possible to apply such a design to ABCD. Finally, a longitudinal study was not selected due to its observational nature: while it is consistent with the goals of the evaluation, it requires non-intrusive observation, whereas in the case at hand the training services serve as an intervention (Institute for Work and Health 2015). Thus, none of the listed designs is suitable for the assessment.
Due to the specificities of the collected data, the research in question will incorporate both qualitative and quantitative approaches. Each approach is relevant to the project due to the specific benefits it provides. The quantitative approach produces definitive data that can be expressed numerically and is thus more universal. More specifically, the findings of quantitative analysis are comparatively objective and resistant to misinterpretation. More importantly, such results illustrate the outcome of the intervention unambiguously and with a high degree of accuracy, which is especially relevant when multiple measurements are performed over a period of time. Thus, the preoccupation of the quantitative approach is objectivity. In the case of ABCD, several areas of the assessment require a quantitative approach, such as the benefit-to-cost ratio, employee retention, and product quality. All of the identified areas involve quantifiable values, making the quantitative approach the preferred solution.
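The two headline quantitative metrics mentioned above can be computed directly. The following sketch illustrates the calculations; the figures used are hypothetical placeholders, not drawn from ABCD's actual records.

```python
# Illustrative sketch only: all figures below are hypothetical,
# not actual ABCD data.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Program benefits divided by program costs; a value above 1.0
    suggests the training returned more than it cost."""
    return total_benefits / total_costs

def retention_rate(staff_at_start: int, leavers: int) -> float:
    """Share of employees retained over the evaluation period."""
    return (staff_at_start - leavers) / staff_at_start

# Hypothetical example figures
print(benefit_cost_ratio(120_000, 80_000))  # 1.5
print(retention_rate(40, 4))                # 0.9
```

Comparing these values before and after the training campaign, rather than reading either in isolation, is what ties the metrics back to the experimental design described earlier.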
The qualitative approach, on the other hand, focuses on an in-depth analysis of the perceived qualities of the phenomenon being studied. It is usually applied to small samples and requires interpretation by the researchers (Latham 2014). Thus, the results of qualitative studies are difficult to replicate due to the multitude of factors involved. However, qualitative analysis allows for the evaluation of intangible values such as the level of employee satisfaction, which is required for the ABCD training program assessment. It should also be noted that the qualitative approach provides a more in-depth analysis of the causes and interconnections of the factors responsible for a specific outcome, which is especially important for conducting a case study. Such a subjective preoccupation may reveal important details that are undetectable by quantitative analysis. In the case of ABCD, which involves a relatively small sample size, both approaches can be applied simultaneously to save time.
Reliability and Validity
To produce reliable results, several measures should be taken during the assessment process. First, since several observers will conduct measurements, the percentage of agreement between them will be calculated, ensuring inter-rater reliability. Second, the results will be checked for internal consistency using the average inter-item correlation. Third, by collecting data at different points in time and determining the strength of the correlation between the two sets of data, it will be possible to estimate the stability of the observed process (Changing Minds n.d.). Stability is considered high when the two data sets correlate strongly, bearing in mind that shorter time gaps between measurements are typically associated with stronger correlations.
In addition, the stakeholders need to agree that the assessment does indeed measure the quality of the training program adequately, thus ascertaining its face validity. Next, the assessment needs to show a high correlation with a benchmark test to ensure concurrent validity. Predictive validity will be determined by the results' relevance to similar assessments in the future. The quality of the research design will determine its construct validity (Shuttleworth 2017), which can be established through the expected correlation with existing findings (convergent validity) and the distinction between constructs that are not expected to converge (discriminant validity).
The first concept necessary for the research is job performance, which can be defined as the sum of activities expected from an employee and the degree of success demonstrated during their execution. The variables of this concept are related to the specificities of the job and, in the case of ABCD, will likely include the incidence of equipment malfunctions and inadequate financial and resource expenditures. The second concept is employee retention, defined as the capacity of the organization to persuade its employees to stay in the workplace. The variables involved usually include workplace conditions, a favorable psychological environment, the availability of incentives, and employee satisfaction. Importantly, employee satisfaction can also be considered a relevant concept in its own right, defined as the level of satisfaction employees associate with their working conditions. Its variables routinely include working conditions, workload, the characteristics of co-workers, and the perception of management. Because the assessment at hand is conducted to determine the success of the training program, the most relevant variables for the concept of employee satisfaction will be proficiency in certain activities and workplace tasks, the resulting increased security and sense of accomplishment, and, by extension, improved self-esteem.
To collect reliable and accurate data for the project, it is necessary to correctly identify the sample involved. Since the audience that received the training is fairly limited, it would be reasonable to assess the entire population. Technically, such an approach would be categorized as non-probability sampling, since all participants are known to comply with the inclusion criterion (participation in the training program). To be more exact, it would be consistent with the purposive sampling technique, in which the research team collects data only from individuals who belong to the studied population. The identified sampling approach is suitable for the study at hand, as a purposive sample would make it possible to include the entire impacted group in the sample while maintaining a reasonable time frame for the study. It should also be noted that a small sample size may compromise the depth of the inquiry, since in this case the probability of excluding valuable information from the data pool increases (Pennsylvania State University 2017).
Another way to conduct the research in question is to perform probability sampling. However, since the studied population (the entirety of the workers) is already fairly homogeneous, neither the stratified nor the cluster sampling technique would be suitable, since both are designed to mitigate the shortcomings of a heterogeneous population. Thus, simple random sampling remains the most viable probability option. Nevertheless, it should be emphasized that such an approach is only justified when it is either impossible or unreasonable to obtain data from the entire population due to its size. Therefore, the probability sampling technique should be reserved for a scenario in which the population is large enough to discourage the use of purposive sampling.
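The contrast between the two sampling options can be made concrete with a short sketch. The roster below is hypothetical; in the purposive (census) approach the entire trained group forms the sample, whereas simple random sampling draws an equal-probability subset without replacement.

```python
import random

# Hypothetical roster of 40 trained employees (illustrative names only)
employees = [f"employee_{i}" for i in range(1, 41)]

# Purposive (census) approach: everyone who received the training is included.
purposive_sample = list(employees)

# Simple random sampling: each employee has an equal chance of selection,
# drawn without replacement. A fixed seed keeps the draw reproducible.
random.seed(2017)
random_sample = random.sample(employees, k=10)

print(len(purposive_sample))  # 40
print(len(random_sample))     # 10
```

As the text notes, the random draw only becomes worthwhile when the roster grows too large to survey in full; for a 40-person population, the census costs little extra.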
Reference List
Changing Minds n.d., Types of reliability, Web.
Crossman, A 2017, The case study research method, Web.
Institute for Work and Health 2015, What researchers mean by… cross-sectional vs. longitudinal studies, Web.
Latham, J 2014, Qualitative sample size – how many participants is enough? Web.
Pennsylvania State University 2017, Simple random sampling and other sampling methods, Web.
Shuttleworth, M 2017, Types of validity, Web.
Whitley, BE & Kite, ME 2012, Principles of research in behavioral science, 3rd edn, Routledge, New York, NY.