Empirical research methods

The word "empirical" literally means "that which is perceived by the senses." Applied to scientific research methods, the adjective designates techniques and procedures tied to sensory experience, which is why empirical methods are said to rest on so-called "hard" (irrefutable) data. In addition, empirical research adheres firmly to the scientific method, as opposed to other research methodologies such as naturalistic observation, archival research, and the like. The most important premise underlying the methodology of empirical research is that it allows findings to be reproduced and confirmed or refuted. The orientation of empirical research toward "hard data" demands high internal consistency and stability of the instruments (and measures) used for the independent and dependent variables under study. Internal consistency is a key condition of reliability: measurement instruments cannot be highly, or even adequately, reliable unless the tools that supply the raw data for subsequent analysis produce high intercorrelations. Failure to meet this requirement introduces error variance into the system and yields ambiguous or misleading results.

Sampling techniques

Empirical research methods depend on the availability of adequate and effective sampling techniques that yield reliable and valid data, which can reasonably, and without loss of meaning, be generalized to the populations from which the samples were drawn, provided those samples are representative or at least closely approximate the population. Although most statistical methods used to analyze empirical data assume essentially random sampling and/or random assignment of experimental subjects to conditions (groups), randomness per se is not the main issue. The real concern is the undesirability of relying primarily or exclusively on subjects who constitute extremely limited or refined samples, as happens when volunteer college students are invited to participate in studies, a practice widespread in psychology and the other social and behavioral sciences. Such an approach negates the advantages of empirical research over other research methodologies.

Measurement accuracy

Empirical research methods in general, and in psychology in particular, inevitably involve the use of various measures. In psychology such measures chiefly concern observed or perceived patterns of behavior, self-reports, and other psychological phenomena. It is critical that these measures be sufficiently precise, while also being clearly interpretable and valid. Otherwise, just as with inadequate sampling methods, the advantages of empirical research methodology will be negated by erroneous and/or misleading results. In using psychometric instruments the researcher faces at least two serious problems: a) the crudeness of even the most sophisticated and reliable instruments available for measuring the independent and dependent variables, and b) the fact that any psychological measurement is indirect rather than direct. No psychological property can be measured directly; only its presumed manifestation in behavior can be measured. A property such as "aggressiveness," for example, can be judged only indirectly, by the degree to which it is displayed or acknowledged by the individual, as measured with a special scale or some other psychological instrument or technique designed to register varying degrees of "aggressiveness" as defined and understood by the developers of that instrument.

Data obtained from the measurement of psychological variables represent only the observed values of those variables (Xo). The "true" values (Xi) always remain unknown; they can only be estimated, and the estimate depends on the magnitude of the error (Xe) present in any individual Xo. In all psychological measurements the observed value represents a region rather than a point (as it can, for example, in physics or thermodynamics): Xo = Xi + Xe. It is therefore extremely important for empirical research that the Xo values of all variables turn out to be close to Xi. This can be achieved only through the use of highly reliable measuring instruments and procedures, applied by experienced and qualified scientists or specialists.
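The relation Xo = Xi + Xe can be illustrated with a minimal simulation. The sketch below is a hedged illustration in plain Python (3.10+ for statistics.correlation); all parameter values are invented. It generates "true" scores, adds random error of two different sizes, and shows that the correlation between two parallel measurements, a common estimate of reliability, falls as error variance grows.

```python
import random
import statistics

def simulate_scores(n, error_sd, true_mean=50.0, true_sd=10.0, seed=0):
    """Simulate n subjects measured twice with the same imperfect instrument.

    Each observed score Xo is the unknown true score Xi plus random error Xe.
    Returns two parallel sets of observed scores.
    """
    rng = random.Random(seed)
    true_scores = [rng.gauss(true_mean, true_sd) for _ in range(n)]
    form_a = [xi + rng.gauss(0, error_sd) for xi in true_scores]
    form_b = [xi + rng.gauss(0, error_sd) for xi in true_scores]
    return form_a, form_b

def reliability(a, b):
    """Correlation between two parallel measurements: one estimate of reliability."""
    return statistics.correlation(a, b)

if __name__ == "__main__":
    for error_sd in (2.0, 10.0):  # small vs. large error variance, chosen arbitrarily
        a, b = simulate_scores(500, error_sd)
        print(f"error SD = {error_sd:4.1f}  estimated reliability = {reliability(a, b):.2f}")
```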

Control in the experiment

In empirical research there are three types of variables that shape the course of an experiment: a) independent variables, b) dependent variables, and c) intervening, or extraneous, variables. The first two types are built into the experimental plan by the researcher; variables of the third type are not introduced by the researcher but are always present in the experiment, and they must be controlled. Independent variables relate to or reflect environmental conditions that can be manipulated in the experiment; dependent variables relate to or reflect behavioral outcomes. The purpose of the experiment is to vary the environmental conditions (independent variables) and observe the behavioral events that follow (dependent variables), while simultaneously controlling, or eliminating the effects of, any other (extraneous) variables.

The control of variables that empirical research requires can be achieved either through the experimental design or by statistical means.

Experimental plans

As a rule, three main kinds of experimental design are used in empirical research: a) hypothesis-testing designs, b) estimation designs, and c) quasi-experimental designs. Hypothesis-testing designs address the question of whether the independent variables influence the dependent variables. The statistical significance tests used in such experiments are typically two-sided, and conclusions are formulated in terms of the presence or absence of an effect of the environmental manipulation on behavioral outcomes and changes in behavior.
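As a hedged illustration of a hypothesis-testing design, the sketch below runs a two-sided two-sample t-test on two groups and states the conclusion only as "effect present" or "effect absent," in the spirit of the paragraph above. It assumes the scipy library is available; the group data are invented solely for illustration.

```python
# A minimal hypothesis-testing sketch: two-sided t-test on two independent groups.
from scipy import stats

control   = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]   # no environmental manipulation
treatment = [4.8, 5.1, 4.6, 5.0, 4.9, 4.7, 5.2, 4.5]   # manipulation applied (invented data)

result = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's two-sided t-test
alpha = 0.05

print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < alpha:
    print("Conclusion: an effect of the manipulation is present (null hypothesis rejected).")
else:
    print("Conclusion: no effect demonstrated (null hypothesis retained).")
```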

Estimation designs resemble hypothesis-testing designs in that they rely on quantitative descriptions of the variables, but they go beyond simple null-hypothesis testing, which is largely limited to two-sided tests of statistical significance. They are used to examine the further question of how the independent variables influence the observed outcomes. Such experiments focus on quantitative and qualitative descriptions of the nature of the relationships between the independent and dependent variables. Correlational methods usually serve as the statistical procedures for data analysis in these experiments. The main emphasis falls on determining confidence limits and standard errors, and the principal objective is to estimate, as accurately as possible, the true values of the dependent variables for all observed values of the independent variables.
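A hedged sketch of an estimation design follows. With invented paired observations it computes the correlation between an independent and a dependent variable, a least-squares prediction of the dependent variable, and an approximate 95% confidence interval for the slope (statistics.correlation and statistics.linear_regression require Python 3.10+).

```python
import math
import statistics

# Invented paired data: x = level of the independent variable, y = observed outcome.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2.1, 2.9, 3.8, 4.2, 5.1, 5.8, 6.6, 7.1, 8.2, 8.8]

r = statistics.correlation(x, y)
slope, intercept = statistics.linear_regression(x, y)

# Standard error of the slope and a rough 95% confidence interval.
n = len(x)
residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
s_err = math.sqrt(sum(e * e for e in residuals) / (n - 2))
sxx = sum((xi - statistics.mean(x)) ** 2 for xi in x)
se_slope = s_err / math.sqrt(sxx)
t_crit = 2.306  # two-sided 95% critical value of Student's t for n - 2 = 8 degrees of freedom

print(f"r = {r:.3f}, slope = {slope:.3f} +/- {t_crit * se_slope:.3f} (95% CI)")
print(f"predicted y at x = 7: {slope * 7 + intercept:.2f}")
```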

Quasi-experimental designs are similar to hypothesis-testing designs, except that in such designs the independent variables either are not available for manipulation or are not manipulated in the experiment. Designs of this type are quite widely used in empirical research in psychology and the other social and behavioral sciences, especially for applied problems. They belong to a category of research procedures that goes beyond naturalistic observation but does not reach the more complex and demanding levels of the other two basic types of experimental design.

The role of statistical analysis

Psychological research, whether empirical or not, is based chiefly on data obtained from samples. Empirical research methods therefore need to be supplemented by statistical analysis of these sample data so that reasonable conclusions can be drawn about the results of hypothesis testing.

Empirical testing of hypotheses

The most valuable experimental design for conducting empirical research in psychology and related sciences is the hypothesis-testing design. A definition of "hypothesis" tied to the methodology of empirical research is therefore in order. An exceptionally precise and concise definition is given by Brown and Ghiselli:

A hypothesis is a statement about factual and conceptual elements and their relationships that goes beyond known facts and accumulated experience to achieve improved understanding. It is an assumption or a lucky guess containing a condition that has not yet been actually demonstrated, but which deserves investigation.

Empirical confirmation of several interrelated hypotheses leads to the formulation of a theory. Theories that are invariably confirmed by the empirical results of repeated studies, especially if they can be described precisely by mathematical equations, eventually acquire the status of a scientific law. In psychology, however, scientific law is an elusive concept. Most psychological theories rest on the empirical testing of hypotheses, but to date no psychological theory has reached the level of a scientific law.

See also Confidence Limits, Control Groups

Empirical methods and psychological assistance

The group of empirical methods in psychology is traditionally considered the main one.

Observation is the oldest method of knowledge. Its primitive form, everyday observation, is used by every person in daily practice. Observation appears in psychology in two main forms: as self-observation, or introspection, and as external, so-called objective, observation.

The general observation procedure consists of the following steps: defining the task and goal; choosing the object, subject, and situation; choosing an observation method that affects the object under study as little as possible while best ensuring the collection of the necessary information; selecting methods for recording the observed phenomena; and processing and interpreting the information obtained.
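As a hedged illustration of the recording step, the sketch below defines a minimal observation-protocol record and a helper for tallying recorded events. All field names and categories are assumptions introduced purely for illustration, not part of any standard protocol.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class ObservationEvent:
    """One recorded behavioral fact: what was observed, when, and under which conditions."""
    timestamp: datetime
    observer: str
    subject_id: str
    situation: str          # e.g. "free play", "group discussion"
    behavior_category: str  # category from a pre-defined observation scheme
    note: str = ""

@dataclass
class ObservationProtocol:
    """A protocol groups events for later processing and interpretation."""
    goal: str
    events: List[ObservationEvent] = field(default_factory=list)

    def record(self, event: ObservationEvent) -> None:
        self.events.append(event)

    def count_by_category(self) -> Dict[str, int]:
        counts: Dict[str, int] = {}
        for e in self.events:
            counts[e.behavior_category] = counts.get(e.behavior_category, 0) + 1
        return counts

protocol = ObservationProtocol(goal="frequency of cooperative behavior during free play")
protocol.record(ObservationEvent(datetime.now(), "observer A", "child 3", "free play", "cooperation"))
print(protocol.count_by_category())
```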

The observation method consists in recording and reflecting the behavioral reactions of another person. The observer (psychotherapist or psychologist) takes a passive position and merely observes. Scientific observation has its own criteria: the observer should be a professional psychologist who knows the capabilities of this method well, since only a psychologist or psychotherapist can interpret a given fact correctly. A trained observer can reduce the subjectivity of judgments about the observed facts. When observing other people we relate their external reactions to their inner life, but only a professional psychologist can solve the problem of measuring how closely external reactions correspond to a person's inner content.

As a rule, the phenomenon under study is observed in its usual conditions, without any changes being introduced. One of the main requirements of this method is a clearly stated goal, in accordance with which an observation plan is drawn up and recorded in a scheme. Planned and systematic observation is the most essential feature of observation as a scientific method and must eliminate the element of chance inherent in everyday observation. If observation proceeds from a clearly recognized goal, it acquires a selective character: it is impossible to observe everything, given the limitless diversity of what exists, so any observation is selective and partial.

Observation becomes a method only insofar as it does not stop at the simple recording of facts but proceeds to the formulation of hypotheses in order to test them against new observations. The separation of subjective interpretation from what is objective, and the exclusion of the subjective, is carried out in the course of observation itself, combined with the formulation and testing of hypotheses.

The psychological interpretation of external data is not given directly; it must be sought on the basis of hypotheses that are then verified in observation. In other words, description must turn into explanation, and the fate of the scientific study depends on this.

The main advantage of the objective observation method is that it allows the study of mental processes in natural conditions. However, objective observation, while retaining its importance, for the most part must be supplemented by other research methods.

Observation is carried out not once but systematically: in relation to the same person, and in relation to the same phenomenon in many persons and in various, most characteristic situations. Observation is used above all when minimal interference in natural behavior and in relationships between people is required, and when a holistic picture of what is happening is sought.

The following types of observation are distinguished: cross-sectional (short-term) and longitudinal (long-term, sometimes extending over a number of years). Observation can also be laboratory or natural: laboratory observation takes place in artificial conditions, most often in a laboratory, while natural observation takes place in conditions and surroundings familiar to the person.

Observation may be participant or non-participant. In participant observation the observer is involved in the activity in which the observed are engaged, and in this case the observed know nothing about the observation being carried out. In non-participant observation the roles are divided: some people are observed, others observe, and the observed are aware of the observation.

The observation method can be structured or unstructured. In the first case the observed facts are recorded according to a strictly defined scheme; in the second, observation covers the entire set of facts.

Observation can be continuous or selective. With continuous observation, all behavioral reactions are recorded. Selective observation involves limiting the observation area.

Observation can be direct or indirect. In direct observation the study is carried out by the researcher himself, who draws conclusions from the results of his own observation. Indirect (vicarious) observation occurs when information is obtained from someone else's observation.

The observation method is not without its drawbacks. Attitudes, interests, psychological states, and personal characteristics of the observer can greatly influence the results of observation. The more the observer is focused on confirming his hypothesis, the greater the distortion in the perception of events. He selectively perceives only part of what is happening. Prolonged observation leads to fatigue, adaptation to the situation, and a feeling of monotony, which increases the risk of inaccurate recordings. There is a certain difficulty in interpreting the data. In addition, observation requires a significant investment of time.

One type of observation is self-observation (introspection), whether immediate or delayed (in memories, notes, and diaries a person analyzes his feelings, thoughts, and experiences). The method of self-observation consists both in observing one's own outwardly expressed activity and the psychologically significant facts of one's life, and in observing one's own inner life and mental states. The scientific value of self-observation data depends on how objective they are and how well they correspond to real facts. As experimental studies show, people tend to overestimate their strengths and play down their shortcomings. In combination with other methods, however, self-observation can produce positive results.

The experimental method involves the active intervention of the researcher in the activity of the subject in order to create conditions under which a psychological fact is revealed. The main advantage of all types of experiment is that a particular mental process can be deliberately induced and the dependence of a mental phenomenon on changing external conditions can be traced.

The history of science has demonstrated the leading role of the experimental method in the development of scientific knowledge. Suffice it to recall that psychology, one of the oldest fields of inquiry, separated from philosophy into an independent branch of knowledge only in the middle of the 19th century, when systematic experimentation in psychology began (G. Fechner, E. Weber, W. Wundt, and others).

The experimental method is aimed at studying mental phenomena in specially created conditions and presupposes an active position of the experimenter in relation to the subject. During an experiment the human psyche itself may change, both outwardly and inwardly. An experiment is a research activity undertaken to study cause-and-effect relationships; it assumes that the experimenter himself produces the phenomenon under study and actively influences it by changing the conditions under which it occurs. The experiment allows results to be reproduced repeatedly and quantitative regularities to be established. The main task of a psychological experiment is to make the essential features of an internal mental process accessible to objective external observation.

As a method of psychology, the experiment arose in the fields of psychophysics and psychophysiology and then became widespread. Its use expanded from the elementary processes of sensation to the higher mental processes. The very nature of the experiment also changed: from studying the relationship between an individual physical stimulus and the corresponding mental process, it moved on to studying the laws governing the course of mental processes themselves under certain objective conditions.

Analysis of the experiment as a scientific activity allows a system of necessary research stages to be identified:

I. Theoretical stage of the study (problem formulation). At this stage the following tasks are solved:
a) formulating the problem and the research topic;
b) defining the object and subject of the research;
c) defining the experimental tasks and research hypotheses. It is important that the title of the topic include the basic concepts of the subject of research.
The boundaries of the subject of research should be set with regard to the purpose and objectives of the study, the object of study, the material and time resources available for experimentation, and the results of previous scientific work on the question.
II. Methodological stage of the study. At this stage the experimental methodology and the experimental design are developed. The experimental technique must reproduce the subject of research in the form of a variable experimental situation.
In an experiment two series of variables are distinguished: independent and dependent. The factor changed by the experimenter is called the independent variable; the factor that changes under the influence of the independent variable is called the dependent variable.
Developing the experimental plan involves drawing up an experimentation program as a working plan and sequence of experimental procedures, together with the mathematical planning of the processing of experimental data, i.e. a mathematical model for processing the experimental results (a minimal sketch illustrating such a plan appears after this list).
III. Experimental stage. At this stage the experiments themselves are carried out: the experimental situation is created, observation is conducted, the course of the experiment is controlled, and the reactions of the subjects are measured.
The main problem of this stage is to create in the subjects an identical understanding of the task of their activity in the experiment. This problem is solved by reproducing constant conditions for all subjects and by the instruction, which serves as a single set toward the activity. At this stage the role of the experimenter and his behavior are very important, since the subjects include his personality in the context of the experimental situation. The instruction aims to lead all subjects to a common understanding of the task, acting as a kind of psychological set.
IV. Analytical stage. At this stage quantitative analysis of the results (mathematical processing), scientific interpretation of the facts obtained, and the formulation of new scientific hypotheses and practical recommendations are carried out.
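The sketch referred to above is a hedged, minimal illustration of stages II-IV in Python: it defines one independent variable with two conditions (the condition names and the size of the simulated effect are invented), randomly assigns subjects, simulates the dependent measurements, and performs simple descriptive processing per condition.

```python
import random
import statistics

# One independent variable with two levels; the condition names are assumptions.
CONDITIONS = ["control", "experimental"]

def assign_subjects(n_subjects, seed=1):
    """Stage II/III: randomly assign subjects to conditions (the working plan)."""
    rng = random.Random(seed)
    return {s: rng.choice(CONDITIONS) for s in range(n_subjects)}

def run_trial(condition, rng):
    """Stage III: stand-in for measuring the dependent variable under a condition."""
    base = 5.0 if condition == "control" else 6.0  # invented effect, illustration only
    return base + rng.gauss(0, 1.0)

def process(results):
    """Stage IV: mathematical processing - descriptive statistics per condition."""
    return {c: (statistics.mean(v), statistics.stdev(v)) for c, v in results.items()}

rng = random.Random(2)
results = {c: [] for c in CONDITIONS}
for subject, condition in assign_subjects(40).items():
    results[condition].append(run_trial(condition, rng))

for condition, (m, sd) in process(results).items():
    print(f"{condition:12s} mean = {m:.2f}, sd = {sd:.2f}")
```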

It should be borne in mind that the mathematical coefficients of variation statistics do not in themselves reveal the essence of the mental properties under study, since they are external to that essence and describe only the probabilistic outcome of their manifestation and the relationship between the frequencies of the compared events, not between their essences. The essence of the phenomena is revealed through subsequent scientific interpretation, that is, through comparison of the empirical facts according to the logic of the cause-and-effect relationships modeled in the experimental situation.

An experiment can be laboratory, natural, thought (mental), ascertaining, formative (educational), or associative.

A laboratory experiment takes place in special conditions using special equipment, and the actions of the subject are determined by instructions. As a rule, the subject knows that an experiment is being conducted, although he may not know its true purpose. The experiment is carried out repeatedly with a large number of subjects, which makes it possible to establish general, mathematically and statistically reliable regularities in the development of mental phenomena.

The disadvantage of this method is the difficulty of transferring laboratory techniques to practical conditions, as well as the difference between the course of mental processes in the laboratory and their course under ordinary conditions (the artificiality and abstractness of the experiment).

In a natural experiment its participants perceive everything that happens as a genuine event, although the phenomenon under study is placed by the experimenter in the conditions he needs and is subjected to objective recording. The natural experiment is an intermediate form between observation and experiment; it was proposed by the Russian scientist A. F. Lazursky (1910). This method combines experimental research with natural conditions. Its logic is as follows: the conditions in which the activity under study takes place are subjected to experimental influence, while the activity itself is observed in its natural course. Instead of transferring the phenomena under study into the laboratory, researchers try to take account of the relevant influences and to select natural conditions that suit their purposes.

In a thought experiment, it is assumed that all changes occur in the imagination of the person who experiments with imaginary images.

A formative (educational) experiment serves as a means of influencing and changing people's psychology. Its distinctive feature is that it is simultaneously a means of research and a means of shaping the phenomenon under study. The formative experiment is characterized by the researcher's active intervention in the mental processes he studies, and it involves designing and modeling the content of the mental formations to be developed, as well as the psychological and pedagogical means and ways of forming them. V. V. Davydov, one of the founders of the formative experiment in Russia, calls this type of experiment genetic modeling, since it embodies the unity of the study of mental development with upbringing and teaching.

This method relies on designing and redesigning new educational and training programs and ways of implementing them, and is aimed at studying mental phenomena in the process of education and training through the introduction of the most active teaching methods, by means of which the professionally important qualities of the future specialist are formed.

The association experiment was first proposed by the English psychologist F. Galton and developed by the Swiss scientist C. G. Jung. Its essence is that the subject is asked to respond to each stimulus word with the first word that comes to mind. In all cases the reaction time, i.e. the interval between the stimulus word and the answer, is recorded.
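A hedged, minimal sketch of such a procedure follows: it presents stimulus words, reads the subject's first response from the keyboard, and records the reaction time. The stimulus words are arbitrary examples, and console input merely stands in for a real presentation apparatus.

```python
import time

STIMULI = ["table", "night", "water", "road"]  # arbitrary example stimuli

def run_association_trial(stimulus):
    start = time.perf_counter()
    response = input(f"{stimulus} -> ")          # the first word that comes to the subject's mind
    reaction_time = time.perf_counter() - start  # interval between stimulus and answer, in seconds
    return response, reaction_time

if __name__ == "__main__":
    protocol = []
    for word in STIMULI:
        response, rt = run_association_trial(word)
        protocol.append((word, response, rt))
    for word, response, rt in protocol:
        print(f"{word:8s} {response:12s} {rt:5.2f} s")
```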

Psychodiagnostic methods are designed to record and describe, in an orderly form, psychological differences both between people and between groups of people united by some characteristic (not always a psychological one). Depending on the aims of the study, the diagnosed characteristics may include psychological differences in age, sex, education, and culture in the broadest sense of these terms, mental states, psychophysiological characteristics, and so on.

Psychological tests are systems of special tasks that make it possible to measure the level of development, or the state, of certain mental qualities or properties of an individual. The English word test means "trial" or "check." The term was introduced into the practice of psychological research at the end of the 19th century by the American scientist J. Cattell. Tests became widespread and acquired practical significance after A. Binet, together with T. Simon, developed his system for determining the mental development or giftedness of children.

A test is a short, standardized trial that, as a rule, does not require complex technical devices and lends itself to standardization and mathematical processing of the data; its results determine the presence and level of development of certain mental qualities of a person.

The main advantage of the test is that it makes it possible to quantify mental qualities that are difficult to measure: intelligence, personality traits, the anxiety threshold, and so on. Tests differ from other research methods in requiring a clear procedure for collecting and processing the primary data, as well as a distinctive approach to its subsequent interpretation.

Currently the test method is used in psychology alongside other methods. It serves to identify particular abilities, skills, and aptitudes (or their absence), to characterize certain personality qualities as precisely as possible, to determine the degree of suitability for work in a particular field, and so on.

The diagnostic value of a test largely depends on the level of the scientific experiment and on the reliability of the psychological fact taken as its basis, i.e., on how the test was constructed: whether it resulted from extensive preliminary experimental work or from crude, random, and superficial observations. Insufficiently substantiated and verified psychological tests can cause serious errors and significant damage in professional selection, in teaching practice, and in the diagnosis of defects and temporary delays in mental development.

Tests must be scientifically valid and identify persistent psychological characteristics.

The development and use of any tests must meet certain requirements:

1. Reliability: the test excludes random and systematic errors in data collection and measurement.
2. Validity (adequacy): the test actually measures the mental quality it is intended to assess.
3. Standardization: test scores undergo a linear or nonlinear transformation whose purpose is to replace the original scores with new, derived ones that make the test result easier to interpret (a minimal sketch of such a transformation appears after this list).
4. Comparability of individual data with normative data.
5. Practicality: sufficient simplicity, economy, and efficiency of use across a wide variety of situations and activities.
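As a hedged illustration of item 3, the sketch below converts raw scores into z-scores and then into T-scores (mean 50, standard deviation 10), one common linear standardization. The raw scores of the normative sample are invented.

```python
import statistics

raw_scores = [12, 15, 9, 18, 14, 11, 16, 13, 17, 10]  # invented normative sample

mean = statistics.mean(raw_scores)
sd = statistics.stdev(raw_scores)

def to_z(score):
    """Linear standardization: how many standard deviations above or below the norm mean."""
    return (score - mean) / sd

def to_t(score):
    """T-score: a derived scale with mean 50 and standard deviation 10."""
    return 50 + 10 * to_z(score)

new_result = 16  # an individual result to be compared with the norms
print(f"raw = {new_result}, z = {to_z(new_result):.2f}, T = {to_t(new_result):.1f}")
```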

The systems approach involves: considering the phenomenon under study as a system, that is, as a delimited set of interacting elements; determining the composition, structure, and organization of the elements and parts of the system and detecting the leading interactions among them; identifying the system's external connections and singling out the main ones; determining the function of the system and its role among other systems; and, on this basis, detecting the laws and trends of the system's development. When analyzing psychological phenomena it is necessary to treat them as complexly organized objects consisting of subsystems and themselves included, as subsystems, in systems of a higher level. It is important to identify the full variety of elements within the structure of a socio-psychological phenomenon, all the connections between them, and the relationship of the psychological phenomenon under study to phenomena external to it.

The systems approach orients the psychologist in the search for the causes of positive or negative trends in the development of a particular psychological phenomenon: if similar positive or negative features have appeared not in one but in several elements of the system, the reasons should first of all be sought not in those elements but in the system itself.

The application of the test method must be carried out taking into account all specific conditions (place, time, specific current situation).

The survey method consists in finding out a person's opinion on some issue or problem and obtaining information about objective and subjective facts from the words of the respondents. This method appeals to a person's subjective experience and individual opinion.

The variety of survey methods used in psychological research can be reduced to two main types:

1) “face-to-face” survey - an interview conducted by a researcher on a specific topic;
2) correspondence survey - questionnaires intended for self-completion.

Oral questioning is a traditional method of psychological research and has long been used by psychologists of various schools and orientations. During an interview various kinds of questions may be asked: direct (the wording corresponds to what the interviewer wants to learn), indirect (the wording and the goal do not coincide), projective (for example, a person is asked about people in his circle, while information is in fact obtained about himself), open (allowing a free-form answer), closed (offering a fixed set of answer options), leading, suggestive, and so on.

Surveys as methods of collecting primary information have certain limitations. Their data are largely based on the respondents' self-observation and often indicate, even when the respondents are completely sincere, not so much their genuine opinions as the way they choose to present them.

The scope of surveys in psychological research is broad: the early, exploratory stages of research, when interview data are used to establish the variables related to the problem under study and to put forward working hypotheses; obtaining data to measure the relationships between the variables under study; and clarifying, extending, and checking data obtained both by other methods and through one or another form of the survey itself.

There are two types of interviews - standardized and non-standardized. In a standardized interview, the wording of questions and their sequence are determined in advance and are the same for all respondents. The researcher is not allowed to reformulate any questions or introduce new ones, or change their order.

The non-standardized interview technique, by contrast, is completely flexible and varies widely. The researcher, guided only by a general interview plan, may formulate questions and change the order of the points of the plan according to the specific situation. The advantages of the non-standardized interview are the depth of the information obtained and the flexibility of the survey; its disadvantage is the comparatively narrow coverage of respondents. A combination of questionnaire and interview is usually recommended, since this technique, along with covering a large number of respondents in a relatively short time, yields material for in-depth analysis.

Questionnaire (correspondence) surveys also have their own specific features. A correspondence survey is considered more expedient when it is necessary either to find out people's attitudes toward sensitive, controversial, or intimate questions, or to poll a large number of people in a relatively short time. The main advantage of questionnaires is the possibility of mass coverage of a large number of people. A questionnaire guarantees anonymity to a greater extent than an interview, so respondents may give more sincere answers. However, a questionnaire survey cannot be conducted without definite working hypotheses.

Conversation as a psychological method is an auxiliary means of additional coverage of the problem under study. A conversation should always be planned in accordance with the objectives of the study. The questions asked in a conversation may function as tasks aimed at revealing the qualitative distinctiveness of the processes under study, but such tasks should be as natural and unforced as possible. While planned, a conversation must not be formulaic; it should be as individualized as possible and combined with other objective methods.

A conversation must meet certain conditions. The best results are obtained when relaxed personal contact is established between the researcher and the subject. The conversation should be thought out in advance, with a specific plan drawn up and the main problems to be clarified singled out.

The conversation method also allows the subjects themselves to ask questions. Such a two-way conversation yields more information on the problem under study than the subjects' answers alone.

The study of the products of activity as a research method is widely used in historical psychology (it allows the study of human psychology in past historical epochs), child psychology (the study of the products of children's creativity for the psychological study of the child), and legal psychology (the study of the psychological characteristics of a subject in his absence).

This method is used when direct observation or experimentation is impossible or unavailable.

A variant of the method of studying the products of activity is the biographical method, which uses letters, diaries, biographies, products of children's creativity, handwriting, and so on. In many cases psychological research employs not one but several methods, each of which complements the others by revealing new aspects of mental activity.

INTRODUCTION

Modern science has reached its current level largely thanks to the development of its toolkit, the methods of scientific research. All scientific methods that now exist can be divided into empirical and theoretical. Their main similarity is their common goal, establishing the truth; their main difference is the approach to research.

Scientists who consider empirical knowledge primary are called "practitioners," while supporters of theoretical research are accordingly called "theorists." The emergence of these two opposing schools of science is due to the frequent discrepancy between the results of theoretical research and practical experience.

In the history of knowledge two extreme positions have emerged on the question of the relationship between the empirical and theoretical levels of scientific knowledge: empiricism and scholastic theorizing. Supporters of empiricism reduce scientific knowledge as a whole to the empirical level, belittling or entirely rejecting theoretical knowledge. Empiricism absolutizes the role of facts and underestimates the role of thinking, abstractions, and principles in generalizing them, which makes it impossible to identify objective laws. The same result follows when the insufficiency of bare facts and the need for their theoretical comprehension are acknowledged, but one does not know how to operate with concepts and principles, or does so uncritically and unconsciously.


1. Methods for isolating and studying an empirical object

Empirical research methods include all those methods, techniques, and procedures of cognitive activity, as well as of the formulation and consolidation of knowledge, that constitute the content of practice or its direct result. They can be divided into two subgroups: methods of isolating and studying an empirical object, and methods of processing and systematizing the empirical knowledge obtained, together with the corresponding forms of that knowledge. This can be represented as a list:

⁻ observation - a method of collecting information based on the registration and recording of primary data;

⁻ study of primary documentation - based on the examination of documented information recorded directly at an earlier time;

⁻ comparison - allows the object under study to be compared with an analogue;

⁻ measurement - a method of determining the actual numerical values of indicators of the properties of the object under study, using appropriate units of measurement, for example watts, amperes, rubles, standard hours, etc.;

⁻ the normative method - involves the use of a set of established norms and standards, comparison with which allows the real indicators of a system to be judged, for example against an accepted conceptual model; norms and standards may define the composition and content of functions, the labor intensity of their performance, the number of personnel, the type of structure, etc.; they may act as defining norms (for example, for expenditures of material, financial, and labor resources, span of control, the permissible number of levels of management, the labor intensity of performing functions) or as consolidated values expressed as a ratio to some complex indicator (for example, the working-capital turnover norm); all norms and standards should cover the system as a whole, be scientifically substantiated, and be progressive and forward-looking in character;

⁻ experiment - based on studying the object under investigation in conditions artificially created for it.

When considering these methods it should be borne in mind that they are listed in order of the increasing activity of the researcher. Of course, observation and measurement enter into all types of experiment, but they must also be regarded as independent methods, widely represented in all the sciences.

2. Observation in empirical scientific knowledge

Observation is the primary and elementary cognitive process at the empirical level of scientific knowledge. As scientific observation, it consists of a purposeful, organized, systematic perception of objects and phenomena of the external world. Features of scientific observation:

Relies on a developed theory or individual theoretical provisions;

Serves to solve a specific theoretical problem, pose new problems, put forward new or test existing hypotheses;

Has a justified, systematic and organized nature;

It is systematic, excluding random errors;

Uses special observation equipment - microscopes, telescopes, cameras, etc., thereby significantly expanding the scope and capabilities of observation.

One of the important conditions of scientific observation is that the data collected are not merely personal and subjective in character but can be obtained by another researcher under the same conditions. All this points to the accuracy and thoroughness required in applying this method, in which the role of the particular scientist is especially significant. This is well known and goes without saying.

In science, however, there have been cases when discoveries were made thanks to inaccuracies and even errors in observation results.

A theory or an accepted hypothesis makes targeted observation possible and allows one to notice what would go unnoticed without theoretical guidance. It should be remembered, however, that a researcher "armed" with a theory or hypothesis will be rather biased, which on the one hand makes the search more effective, but on the other may screen out all contradictory phenomena that do not fit the hypothesis. In the history of methodology this circumstance gave rise to an empiricist approach in which the researcher sought to free himself completely from any hypothesis (theory) in order to guarantee the purity of observation and experience.

In observation, the subject's activity is not yet aimed at transforming the object of study. The object remains inaccessible to deliberate change, or is consciously protected from possible influences so as to preserve its natural state, and this is the main advantage of the observation method. Observation, especially when it includes measurement, can lead the researcher to suspect a necessary and lawlike connection, but it is in itself completely insufficient to assert and prove such a connection. The use of devices and instruments expands the possibilities of observation without limit but does not overcome its other shortcomings. In observation the observer's dependence on the process or phenomenon under study is preserved: he cannot, while remaining within the bounds of observation, change the object, manage it, or exercise strict control over it, and in this sense his activity in observation is relative. At the same time, in preparing and carrying out an observation the scientist as a rule resorts to organizational and practical operations with the object, which brings observation closer to experiment. Finally, it is obvious that observation is a necessary component of any experiment, and its tasks and functions are then determined in that context.

3. Obtaining information using the empirical method


Techniques for obtaining quantitative information are represented by two kinds of operation, counting and measurement, corresponding to the objective difference between the discrete and the continuous. In the counting operation, as a method of obtaining exact quantitative information, the numerical parameters of sets consisting of discrete elements are determined, and a one-to-one correspondence is established between the elements of the set that forms the group and the numerical signs with which the count is carried out. The numbers themselves reflect objectively existing quantitative relations.

It should be realized that numerical forms and signs perform a wide variety of functions in both scientific and everyday knowledge, not all of which are related to measurement:

They are means of naming, unique labels or convenient identifying marks;

They are a counting instrument;

Act as a sign to designate a specific place in an ordered system of degrees of a certain property;

They are a means of establishing equality of intervals or differences;

They are signs expressing quantitative relationships between qualities, i.e., means of expressing quantities.

When considering the various scales based on the use of numbers, these functions must be distinguished, since they are performed in turn either by the special symbolic form of numerals or by the numbers acting as the semantic values of the corresponding numerical forms. From this point of view it is obvious that naming scales, examples of which are the numbering of athletes on teams, of cars by the traffic inspectorate, or of bus and tram routes, are neither measurement nor even counting, since here the numerical forms perform the function of naming rather than of counting.
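To connect the functions listed above with the way scales are usually classified, the sketch below maps them onto the conventional nominal, ordinal, interval, and ratio scale types and notes which comparisons are meaningful at each level. The mapping and the summary of permissible operations are added here for illustration; the source text itself names only "naming scales" explicitly.

```python
from enum import Enum

class Scale(Enum):
    """Conventional scale types corresponding to the functions of numbers listed above."""
    NOMINAL = "naming labels (e.g. route numbers)"
    ORDINAL = "place in an ordered series of degrees of a property"
    INTERVAL = "equality of intervals or differences"
    RATIO = "quantitative relations between magnitudes"

# Which comparisons are meaningful at each level - a rough, illustrative summary.
MEANINGFUL_OPERATIONS = {
    Scale.NOMINAL: ["equal / not equal"],
    Scale.ORDINAL: ["equal / not equal", "greater / less"],
    Scale.INTERVAL: ["equal / not equal", "greater / less", "differences"],
    Scale.RATIO: ["equal / not equal", "greater / less", "differences", "ratios"],
}

for scale, ops in MEANINGFUL_OPERATIONS.items():
    print(f"{scale.name:9s} ({scale.value}): {', '.join(ops)}")
```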

The method of measurement in the social sciences and humanities remains a serious problem. There are, first of all, the difficulties of collecting quantitative information about many social and socio-psychological phenomena, for which in many cases no objective, instrumental means of measurement exist. Methods for isolating discrete elements, and objective analysis itself, are also difficult, not only because of the characteristics of the object but also because of the interference of non-scientific value factors: the prejudices of everyday consciousness, religious worldviews, ideological or corporate prohibitions, and so on. It is well known that many so-called assessments, for example of students' knowledge or of the performances of participants in contests and competitions even at the highest level, often depend on the qualifications, honesty, corporate spirit, and other subjective qualities of the teachers, judges, and jury members. Apparently such assessment cannot be called measurement in the exact sense of the word, which presupposes, as defined by the science of measurement (metrology), comparison, through a physical (technical) procedure, of a given quantity with one or another value of an accepted standard (a unit of measurement) and the obtaining of an exact quantitative result.


4. Experiment - the basic method of science

Both observation and measurement enter into such a complex basic method of science as the experiment. In contrast to observation, an experiment is characterized by the researcher's intervention in the situation of the objects under study and by the active influence of various instruments and experimental means on the subject of research. An experiment is a form of practice that combines the interaction of objects according to natural laws with action artificially organized by man. As a method of empirical research, it presupposes and permits the following operations, in accordance with the problem to be solved:

₋ constructivization of the object;

₋ isolating the object or subject of research, insulating it from the influence of side phenomena that obscure its essence, and studying it in relatively pure form;

₋ empirical interpretation of the initial theoretical concepts and propositions, and selection or creation of experimental means;

₋ purposeful influence on the object: systematic change, variation, and combination of different conditions in order to obtain the desired result;

₋ repeated reproduction of the process, recording of the data in observation protocols, their processing, and their transfer to other objects of the class that have not been investigated.

The experiment is not carried out spontaneously or at random, but to solve definite scientific problems and cognitive tasks dictated by the state of theory. It is the main means of accumulating the facts that form the empirical basis of any theory; like practice as a whole, it is an objective criterion of the relative truth of theoretical propositions and hypotheses.

The subject structure of the experiment allows us to isolate the following three elements: the knowing subject (experimenter), the means of the experiment, the object of the experimental study.

On this basis a branched classification of experiments can be given. Depending on qualitative differences in the objects of research, one can distinguish physical, technical, biological, psychological, sociological, and other experiments. The nature and variety of the means and conditions of experimentation make it possible to distinguish direct (natural) and model experiments, and field and laboratory experiments. If the goals of the experimenter are taken into account, search, measurement, and testing experiments are distinguished. Finally, depending on the nature of the strategy, one can distinguish experiments carried out by trial and error, experiments based on a closed algorithm (for example, Galileo's study of falling bodies), experiments using the "black box" method, the "step strategy," and so on.

The classical experiment was based on methodological premises that to one degree or another reflected Laplace's idea of determinism as an unambiguous cause-and-effect relationship. It was assumed that, knowing the initial state of a system under certain constant conditions, one could predict its behavior in the future; that the phenomenon under study could be clearly isolated and realized in the desired direction; and that all interfering factors could be strictly ordered or else disregarded as unimportant (for example, the subject could be excluded from the results of cognition).

The growing importance of probabilistic-statistical concepts and principles in the actual practice of modern science, together with the recognition not only of objective determinateness but also of objective uncertainty, and the consequent understanding of determination as relative uncertainty (or as the limitation of uncertainty), has led to a new idea of the structure and principles of the experiment. The development of the new experimental strategy was directly caused by the transition from the study of well-organized systems, in which phenomena depending on a small number of variables could be isolated, to the study of so-called diffuse, or poorly organized, systems. In such systems it is impossible to distinguish individual phenomena clearly or to separate the effects of variables of different physical natures. This required wider use of statistical methods and, in effect, introduced the "concept of chance" into the experiment. The experimental program came to be designed so as to vary numerous factors as widely as possible and take them into account statistically.

Thus the experiment has turned from a single-factor, strictly determined procedure reproducing unambiguous connections and relations into a method that takes into account the many factors of a complex (diffuse) system and reproduces both single-valued and multivalued relations; that is, the experiment has acquired a probabilistically deterministic character. Moreover, the experimental strategy itself is often not strictly determined and may change depending on the results obtained at each stage.
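As a hedged sketch of such a multi-factor strategy, the fragment below builds a small full-factorial design (the factor names and levels are invented), randomizes the run order so that uncontrolled influences are spread statistically across conditions rather than confounded with them, and lists the resulting program of runs.

```python
import itertools
import random

# Invented factors and levels for a small full-factorial program.
FACTORS = {
    "stimulus_intensity": ["low", "high"],
    "time_of_day": ["morning", "evening"],
    "instruction_type": ["neutral", "motivating"],
}

# All combinations of factor levels: the design matrix.
design = [dict(zip(FACTORS, combo)) for combo in itertools.product(*FACTORS.values())]

# Replicate each condition and randomize the run order so that uncontrolled
# (extraneous) influences are distributed across conditions.
REPLICATES = 3
runs = design * REPLICATES
random.Random(7).shuffle(runs)

for i, condition in enumerate(runs, start=1):
    print(f"run {i:2d}: {condition}")
print(f"total runs: {len(runs)} ({len(design)} conditions x {REPLICATES} replicates)")
```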

Material models reflect the corresponding objects in three forms of similarity: physical similarity, analogy, and isomorphism as a one-to-one correspondence of structures. A model experiment deals with a material model, which is both the object of study and an experimental tool. With the introduction of a model the structure of the experiment becomes considerably more complicated: the researcher and the apparatus now interact not with the object itself but with the model that replaces it, so the operational structure of the experiment grows more complex. The role of the theoretical side of the study is strengthened, since the similarity relation between the model and the object, and the possibility of extrapolating the data obtained to that object, must be substantiated. Let us consider the essence of the extrapolation method and its features in modeling.

Extrapolation, as a procedure for transferring knowledge from one subject area to another, unobserved and unstudied, on the basis of some identified relation between them, is one of the operations that serve to optimize the process of cognition.

In scientific research inductive extrapolations are used, in which a regularity established for objects of one kind is transferred, with certain refinements, to other objects. Thus, having established the compressibility of a particular gas, for example, and expressed it in the form of a quantitative law, one can extrapolate it to other, unstudied gases, taking their compressibility into account. In exact natural science extrapolation is also used, for example, when an equation describing a certain law is extended to an as yet unstudied domain (the mathematical hypothesis), with a possible change in the form of the equation being assumed. In general, in the experimental sciences extrapolation refers to the extension of:

Qualitative characteristics from one subject area to another, from the past and present to the future;

Quantitative characteristics of one area of ​​objects to another, one unit to another, based on methods specially developed for this purpose;

Some equation for other subject areas within one science or even for other areas of knowledge, which is associated with some modification and (or) reinterpretation of the meaning of their components.

The procedure for transferring knowledge, being only relatively independent, is organically included in such methods as induction, analogy, modeling, mathematical hypothesis, statistical methods and many others. In the case of modeling, extrapolation is part of the operational structure of this type of experiment, consisting of the following operations and procedures:

Theoretical justification of the future model, its similarity with the object, i.e., the operation that ensures the transition from the object to the model;

Building a model based on similarity criteria and the purpose of the study;

Experimental study of the model;

The operation of transition from a model to an object, i.e. extrapolation of the results obtained from studying the model to the object.

Scientific modeling usually relies on a carefully analyzed analogy, specific cases of which are, for example, physical similarity and physical analogy. It should be noted that the conditions of the validity of analogy were worked out not so much in logic and methodology as in the special engineering and mathematical theory of similarity that underlies modern scientific modeling.

Similarity theory formulates the conditions under which the transition from statements about the model to statements about the object is legitimate, both when the model and the object belong to the same form of motion of matter (physical similarity) and when they belong to different forms of motion of matter (physical analogy). These conditions are the similarity criteria, which are ascertained and observed in modeling. For example, in hydraulic modeling, which is based on the laws of mechanical similarity, geometric, kinematic, and dynamic similarity must be observed. Geometric similarity presupposes a constant ratio between the corresponding linear dimensions of the object and the model, and between their areas and volumes; kinematic similarity rests on a constant ratio of the velocities, accelerations, and time intervals during which similar particles describe geometrically similar trajectories; finally, the model and the object are dynamically similar if the ratios of their masses and forces are constant. Compliance with these relations can be assumed to ensure that reliable knowledge is obtained when model data are extrapolated to the object.
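The constant-ratio conditions above can be written as scale factors. The sketch below is a minimal illustration with invented numbers: it derives the area, volume, velocity, acceleration, and force scale factors from assumed length, time, and mass scales and uses one of them to extrapolate a model measurement to the full-scale object.

```python
# Similarity scale factors: constant ratios object/model for length, time, and mass.
# The numeric values are invented for illustration.
length_scale = 20.0   # object linear dimensions are 20x those of the model
time_scale = 4.47     # chosen time ratio (here roughly sqrt(20), as in Froude scaling)
mass_scale = 8000.0   # length_scale**3 if model and object use a fluid of the same density

# Derived scale factors, following directly from the geometric, kinematic and
# dynamic similarity conditions described above.
area_scale = length_scale ** 2
volume_scale = length_scale ** 3
velocity_scale = length_scale / time_scale
acceleration_scale = length_scale / time_scale ** 2
force_scale = mass_scale * acceleration_scale   # force scales as mass x acceleration

# Extrapolating a measurement from the model to the object.
measured_model_velocity = 0.9  # m/s, measured on the model (invented)
predicted_object_velocity = measured_model_velocity * velocity_scale
print(f"velocity scale {velocity_scale:.2f} -> predicted full-scale velocity "
      f"{predicted_object_velocity:.2f} m/s")
```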

The empirical methods of cognition considered here provide factual knowledge about the world, or facts, in which specific, immediate manifestations of reality are recorded. The term "fact" is ambiguous: it can be used to mean an event or fragment of reality, or to mean a special kind of empirical statement, a fact-fixing sentence whose content that event is. Unlike the facts of reality, which exist independently of what people think about them and are therefore neither true nor false, facts in the form of propositions admit of truth evaluation. They must be empirically true, that is, their truth must be established experimentally and practically.

Not every empirical statement acquires the status of a scientific fact, or rather of a sentence fixing a scientific fact. If statements describe only isolated observations or a random empirical situation, they form a set of data lacking the necessary degree of generality. In the natural sciences and in a number of the social sciences (for example economics, demography, and sociology), statistical processing of a given set of data is, as a rule, carried out; it removes the random elements contained in them and replaces many statements about the data with a single summary statement, which acquires the status of a scientific fact.

5. Scientific facts of empirical research

As knowledge, scientific facts are distinguished by a high degree (probability) of truth, since they record the "immediately given" and describe (rather than explain or interpret) a fragment of reality itself. A fact is discrete and therefore to a certain extent localized in time and space, which lends it a certain precision, all the more so because it is a statistical summary of empirical data cleansed of randomness, or knowledge reflecting what is typical and essential in the object. Yet a scientific fact is at once relative and true knowledge: it is not absolute but relative, that is, capable of further refinement and change, since the "immediately given" includes subjective elements, a description can never be exhaustive, and both the object described in the fact and the language in which it is described change. Although discrete, a scientific fact is at the same time included in a changing system of knowledge, and the very notion of what a scientific fact is changes historically.

Since the structure of a scientific fact includes not only information that depends on sensory knowledge but also its rational foundations, the question arises of the role and forms of these rational components. Among them are logical structures, conceptual (including mathematical) apparatus, as well as philosophical, methodological and theoretical principles and prerequisites. A particularly important role is played by the theoretical prerequisites for obtaining, describing and explaining (interpreting) a fact. Without such prerequisites it is often impossible even to detect certain facts, let alone to understand them. The best-known examples from the history of science are the discovery of the planet Neptune by the astronomer J. Galle on the basis of the preliminary calculations and predictions of U. Le Verrier; the discovery of the chemical elements predicted by D. I. Mendeleev in connection with his creation of the periodic system; and the detection of the positron, theoretically predicted by P. Dirac, and of the neutrino, predicted by W. Pauli.

In natural science, facts as a rule already appear in theoretical dress, since researchers use instruments in which theoretical schemes are objectified; accordingly, the empirical results are subject to theoretical interpretation. However, important as these points are, they should not be absolutized. As research shows, at any stage of development of a given natural science one can find a vast layer of fundamental empirical facts and regularities that have not yet been comprehended within the framework of well-founded theories.

Thus, one of the most fundamental astrophysical facts, the expansion of the Metagalaxy, was established as a statistical summary of numerous observations of the "red shift" in the spectra of distant galaxies, carried out since 1914, and of the interpretation of these observations as a consequence of the Doppler effect. Certain theoretical knowledge from physics was, of course, involved, but the inclusion of this fact in the system of knowledge about the Universe took place independently of the development of the theory within whose framework it was later understood and explained, that is, the theory of the expanding Universe, which appeared many years after the first publications on the discovery of the redshift in the spectra of spiral nebulae. The theory of A. A. Friedman helped to evaluate this fact correctly, but the fact itself had entered empirical knowledge about the Universe before and independently of that theory. This testifies to the relative independence and value of the empirical basis of scientific and cognitive activity, which interacts "on equal terms" with the theoretical level of knowledge.

6. Methods that involve working with the obtained empirical information

So far we have been speaking of empirical methods aimed at isolating and studying real objects. Let us now consider the second group of methods at this level, those involved in working with the empirical information already obtained - scientific facts that have to be processed, systematized, subjected to primary generalization, and so on.

These methods are needed when the researcher works within the layer of knowledge already acquired, no longer addressing the events of reality directly: ordering the data obtained, trying to discover regular relationships - empirical laws - and to put forward assumptions about their existence. By their nature these are largely "purely logical" methods, unfolding according to rules adopted above all in logic, but at the same time included in the context of the empirical level of scientific research, with the task of ordering the available knowledge. At the level of ordinary, simplified notions, this stage of initial, predominantly inductive generalization of knowledge is often interpreted as the very mechanism by which theory is obtained, which shows the influence of the "all-inductivist" conception of knowledge widespread in past centuries.

The study of scientific facts begins with their analysis. By analysis we mean a research method consisting of the mental dissection (decomposition) of a whole or generally complex phenomenon into its component, simpler elementary parts and the identification of individual aspects, properties, and connections. But analysis is not the final goal of scientific research, which seeks to reproduce the whole, to understand its internal structure, the nature of its functioning, the laws of its development. This goal is achieved by subsequent theoretical and practical synthesis.

Synthesis is a research method consisting of connecting, reproducing the connections of the analyzed parts, elements, sides, components of a complex phenomenon and comprehending the whole in its unity. Analysis and synthesis have their objective foundations in the structure and laws of the material world itself. In objective reality, there are the whole and its parts, unity and differences, continuity and discreteness, constantly occurring processes of disintegration and connection, destruction and creation. In all sciences, analytical-synthetic activity is carried out, while in natural science it can be carried out not only mentally, but also practically.

The transition from the analysis of facts to theoretical synthesis is itself carried out by methods that, complementing and combining with one another, make up the content of this complex process. One such method is induction, which in the narrow sense is traditionally understood as the method of passing from knowledge of individual facts to knowledge of the general, to an empirical generalization and the establishment of a general proposition that turns into a law or some other essential connection. The weakness of induction lies in the insufficient justification of this transition: the enumeration of facts can never be practically completed, and we cannot be sure that the next fact will not contradict the generalization. Knowledge obtained by induction is therefore always probabilistic. Moreover, the premises of an inductive conclusion contain no knowledge of how essential the generalized features and properties are: by enumerative induction one can obtain knowledge that is not certain but only probable. There are also a number of other methods for generalizing empirical material with whose help, as with popular induction, the knowledge obtained is probable in nature: the method of analogies, statistical methods, and the method of model extrapolation. They differ in the degree of validity of the transition from facts to generalizations. All these methods are often united under the common name of inductive methods, and the term "induction" is then used in the broad sense.

In the general process of scientific knowledge, inductive and deductive methods are closely intertwined. Both methods are based on the objective dialectic of the individual and the general, phenomenon and essence, random and necessary. Inductive methods are of greater importance in sciences that are directly based on experience, while deductive methods are of paramount importance in theoretical sciences as a tool for their logical ordering and construction, as methods of explanation and prediction. To process and generalize facts in scientific research, systematization as reduction into a single system and classification as division into classes, groups, types, etc. are widely used.

7. Methodological aspects

When developing methodological aspects of classification theory, methodologists propose to distinguish between the following concepts:

Classification is the division of any set into subsets according to any criteria;

Systematics is the ordering of objects, which has the status of a privileged classification system, distinguished by nature itself (natural classification);

Taxonomy is the study of any classifications from the point of view of the structure of taxa (subordinate groups of objects) and characteristics.

Classification methods make it possible to solve a number of cognitive problems: to reduce the diversity of material to a relatively small number of entities (classes, types, forms, species, groups, etc.); to identify the initial units of analysis and develop a system of corresponding concepts and terms; to discover regularities, stable features and relationships, and ultimately empirical patterns; to sum up previous research and to predict the existence of previously unknown objects or their properties, as well as to discover new connections and dependencies between objects already known. The compilation of classifications must obey the following logical requirements: one and the same basis must be used within a single classification; the total extension of the members of the classification must equal the extension of the class being classified (proportionality of division); the members of the classification must be mutually exclusive; and so on.
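These logical requirements can also be checked mechanically. The sketch below, with hypothetical class names and a toy universe of objects, tests whether a proposed classification uses mutually exclusive classes whose union equals the class being divided (proportionality of division):

```python
def is_valid_classification(universe, classes):
    """Check proportionality (union of classes == universe) and
    mutual exclusivity (no object falls into two classes)."""
    covered = set()
    for members in classes.values():
        if covered & members:          # overlap: classes are not mutually exclusive
            return False
        covered |= members
    return covered == universe          # proportionality of division

# Illustrative example: classifying a few particles by electric charge.
universe = {"electron", "proton", "neutron", "positron"}
by_charge = {
    "negative": {"electron"},
    "positive": {"proton", "positron"},
    "neutral":  {"neutron"},
}
print(is_valid_classification(universe, by_charge))   # True
```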

The natural sciences contain both descriptive classifications, which simply reduce the accumulated results to a convenient form, and structural classifications, which make it possible to identify and record relationships among objects. Thus, in physics descriptive classifications include the division of fundamental particles by charge, spin, mass, strangeness, and participation in different types of interaction. Some groups of particles can also be classified by types of symmetry (the quark structure of particles), which reflects a deeper, essential level of relationships.

Research in recent decades has revealed methodological problems of classification that a modern researcher and systematizer needs to know. First of all, there is a discrepancy between the formal conditions and rules for constructing classifications and real scientific practice. The requirement that features be discrete in a number of cases gives rise to artificial ways of dividing the whole into discrete feature values; it is not always possible to make a categorical judgment about whether an attribute belongs to an object; and when features have a complex structure, one is limited to indicating their frequency of occurrence, and so on. A widespread methodological problem is the difficulty of combining two different goals in one classification: arranging the material in a form convenient for recording and searching, and revealing the internal systemic relationships within the material - functional, genetic and others (a research grouping).

An empirical law is the most developed form of probabilistic empirical knowledge; by inductive methods it fixes quantitative and other dependencies obtained experimentally, by comparing the facts of observation and experiment. In this it differs, as a form of knowledge, from a theoretical law - reliable knowledge formulated by means of mathematical abstractions and as a result of theoretical reasoning, chiefly as the outcome of a thought experiment on idealized objects.

Research in recent decades has shown that a theory cannot be obtained by inductive generalization and systematization of facts; it does not arise as a logical consequence of facts. The mechanisms of its creation and construction are of a different nature: they presuppose a leap, a transition to a qualitatively different level of knowledge, requiring the creativity and talent of the researcher. This is confirmed, in particular, by numerous remarks of A. Einstein to the effect that there is no logically necessary path from experimental data to theory: the concepts of a theory arise in the process of our thinking rather than following logically from the data.

The body of empirical information provides primary information about new phenomena and about many properties of the objects under study, and thus serves as the initial basis for scientific research.

Empirical methods are based, as a rule, on the use of methods and techniques of experimental research that make it possible to obtain factual information about the object. A special place among them is occupied by basic methods, which are relatively often used in practical research activities.


Methods of empirical research in science and technology include observation, comparison, measurement and experiment, among several others.

Observation is understood as the systematic and purposeful perception of an object that interests us for some reason: things, phenomena, properties, states, aspects of a whole - of both material and ideal nature.

This is the simplest method, which, as a rule, acts as part of other empirical methods, although in a number of sciences it acts independently or as the main one (as in weather observation, observational astronomy, etc.). The invention of the telescope allowed man to extend observation to a previously inaccessible area of ​​the megaworld; the creation of the microscope marked an invasion of the microworld. An X-ray machine, radar, ultrasound generator and many other technical means of observation have led to an unprecedented increase in the scientific and practical value of this research method. There are also methods and techniques for self-observation and self-control (in psychology, medicine, physical education and sports, etc.).

In the theory of knowledge the concept of observation generally appears in the guise of the concept of "contemplation"; it is associated with the categories of the subject's activity and agency.

To be fruitful and productive, observation must satisfy the following requirements:

- be intentional, that is, conducted to solve well-defined problems within the framework of the general goal(s) of scientific activity and practice;

- be planned, that is, consist of observations following a definite plan or scheme arising from the nature of the object as well as from the goals and objectives of the study;

- be purposeful, that is, fix the observer's attention only on the objects of interest and not dwell on those that fall outside the tasks of observation. Observation aimed at perceiving individual details, sides, aspects and parts of an object is called fixing, while observation that embraces the whole through repeated observation (return) is called fluctuating; the combination of these types ultimately yields a holistic picture of the object;

- be active, that is, the observer purposefully searches among a given set of objects for those needed for his tasks and considers the individual properties and aspects of these objects that interest him, relying on his own stock of knowledge, experience and skills;

- be systematic, that is, the observer observes continuously rather than randomly and sporadically (as in simple contemplation), according to a definite, pre-considered scheme, under varied or strictly specified conditions.

Observation as a method of scientific knowledge and practice gives us facts in the form of a set of empirical statements about objects. These facts form the primary information about the objects of cognition and study. Note that in reality itself there are no facts: reality simply exists; facts are in people's heads. Scientific facts are described on the basis of a particular scientific language, of ideas, pictures of the world, theories, hypotheses and models. It is these that determine the primary schematization of our notion of a given object. It is precisely under such conditions that the "object of science" arises (which must not be confused with the object of reality itself, since the former is a theoretical description of the latter!).

Many scientists deliberately trained their ability to observe, that is, their observational acuity. Charles Darwin said that he owed his successes to having intensively developed this quality in himself.

Comparison is one of the most widespread and universal methods of cognition. The well-known aphorism "everything is known by comparison" is the best proof of this. Comparison is the establishment of similarities (identities) and differences between objects and phenomena of various kinds, between their sides and aspects - in general, between objects of study. As a result of comparison, what is common to two or more objects is established - at the given moment or in their history. In the sciences of a historical nature, comparison was developed into a principal method of research, which came to be called the comparative-historical method. Identifying the general, the recurrent in phenomena is, as is well known, a step on the path to knowledge of what is law-governed.

For a comparison to be fruitful it must satisfy two basic requirements: only such sides and aspects of objects, or objects as a whole, between which there is an objective commonality should be compared; and the comparison should be based on the characteristics that are most important and essential for the given research or other task. Comparison by unimportant characteristics can only lead to misconceptions and errors. In this respect one must be careful in drawing conclusions "by analogy". The French even say that "comparison is not proof!"

Objects of interest to a researcher, engineer or designer can be compared either directly or indirectly, through a third object. In the first case one obtains qualitative assessments of the type: more - less, lighter - darker, higher - lower, closer - farther, and so on. True, even here the simplest quantitative characteristics can be obtained: "twice as high", "twice as heavy", etc. When a third object serves as a standard, measure or scale, especially valuable and more precise quantitative characteristics are obtained. Such comparison through an intermediary object is called measurement. Comparison also prepares the ground for a number of theoretical methods; it itself often rests on inference by analogy, which will be discussed further on.

Measurement developed historically out of observation and comparison, but, unlike simple comparison, it is more efficient and precise. Modern natural science, which began with Leonardo da Vinci, Galileo and Newton, flourished thanks to the use of measurement. It was Galileo who proclaimed the principle of a quantitative approach to phenomena, according to which the description of physical phenomena must rest on quantities that have a quantitative measure - number. He said that the book of nature is written in the language of mathematics. Engineering, design and construction continue this same line in their methods. Here, unlike other authors who merge measurement with experiment, we shall consider measurement as an independent method.

Measurement is a procedure for determining the numerical value of some characteristic of an object by comparing it with a unit of measurement accepted as a standard by a given researcher or by all scientists and practitioners. As is well known, there are international and national units for measuring the basic characteristics of various classes of objects: the hour, meter, gram, volt, bit, etc., as well as the day, pood, pound, verst, mile, and so on. Measurement presupposes the presence of the following basic elements: an object of measurement; a unit of measurement, that is, a scale, measure or standard; a measuring instrument; a method of measurement; and an observer.

Measurements can be direct or indirect. In direct measurement, the result is obtained directly from the measurement process itself (for example, using measures of length, time, weight, etc.). With indirect measurement, the desired value is determined mathematically on the basis of other values ​​previously obtained by direct measurement. This is how, for example, the specific gravity, area and volume of bodies of regular shape, speed and acceleration of the body, power, etc. are obtained.
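A small illustrative sketch of this distinction (the object and the numbers are invented): mass and the dimensions of a regular body are measured directly, while density is obtained indirectly, by calculation from those direct measurements.

```python
import math

# Direct measurements (illustrative values for, say, an aluminum cylinder).
mass_kg = 0.530         # from a balance
diameter_m = 0.050      # from a caliper
height_m = 0.100        # from a caliper

# Indirect measurement: density is computed from the directly measured values.
volume_m3 = math.pi * (diameter_m / 2) ** 2 * height_m
density = mass_kg / volume_m3

print(f"density = {density:.0f} kg/m^3")   # roughly 2700 kg/m^3
```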

Measurement allows us to find and formulate empirical laws and fundamental world constants; in this respect it can serve as a source for the formation of entire scientific theories. Thus, Tycho Brahe's long-term measurements of the motion of the planets later enabled Kepler to create generalizations in the form of his well-known three empirical laws of planetary motion. The measurement of atomic weights was one of the foundations on which Mendeleev formulated his famous periodic law, and so on. Measurement provides not only precise quantitative information about reality but can also introduce new qualitative considerations into theory. This is what ultimately happened with Michelson's measurement of the speed of light in the course of the development of Einstein's theory of relativity. The examples could be continued.
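Kepler's third law, mentioned above, is a convenient illustration of an empirical law: in essence it merely records a numerical regularity between the orbital period and the size of the orbit (written here in modern notation).

```latex
% Kepler's third (empirical) law: for any two planets orbiting the Sun,
\frac{T_1^2}{T_2^2} = \frac{a_1^3}{a_2^3}
\quad\Longleftrightarrow\quad
\frac{T^2}{a^3} = \mathrm{const},
% where T is the orbital period and a is the semi-major axis of the orbit.
```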

The most important indicator of the value of a measurement is its accuracy. Thanks to it, facts can be discovered that do not agree with existing theories. At one time, for example, the deviation of the perihelion of Mercury from the calculated value (that is, the value consistent with the laws of Kepler and Newton) by about 43 arcseconds per century could be explained only by creating a new, relativistic picture of the world in the general theory of relativity.

The accuracy of measurements depends on the instruments available, on their capabilities and quality, on the methods used, and on the training of the researcher. Large sums of money are often spent on measurements; they frequently take a long time to prepare and involve many people, and the result may turn out to be null or inconclusive. Often researchers are not ready for the results obtained, because they are committed to a certain conception or theory into which the result cannot be fitted. Thus, at the beginning of the 20th century the scientist Landolt tested with great accuracy the law of conservation of the weight of substances in chemistry and became convinced of its validity. Had his technique been improved (with the accuracy increased by 2-3 orders of magnitude), it would have been possible to detect the mass change implied by Einstein's famous relation between mass and energy, E = mc². But would this have convinced the scientific world of that time? Hardly! Science was not yet ready for it. When, in the 20th century, the English physicist F. Aston confirmed Einstein's theoretical conclusion by determining the masses of radioactive isotopes from the deflection of an ion beam, this was perceived in science as a natural result.
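A rough order-of-magnitude estimate (the reaction energy and the amount of substance are assumptions made purely for illustration, not figures from Landolt's work) shows why balances of that era could not detect the effect: even an energetic chemical reaction changes the mass of the reagents only at a relative level of about 10^-10 to 10^-11.

```latex
% Illustrative estimate: a reaction releasing ~5*10^5 J in ~0.1 kg of reagents
\Delta m = \frac{\Delta E}{c^2}
\approx \frac{5\times 10^{5}\,\text{J}}{(3\times 10^{8}\,\text{m/s})^{2}}
\approx 5.6\times 10^{-12}\,\text{kg},
\qquad
\frac{\Delta m}{m} \approx \frac{5.6\times 10^{-12}}{0.1} \approx 6\times 10^{-11}.
```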

Note that there are definite requirements on the level of accuracy: it must correspond to the nature of the objects and to the requirements of the cognitive, design, engineering or construction task. Thus, in engineering and construction one constantly deals with measuring mass (that is, weight), length (size), and so on, but in most cases extreme precision is not required here; indeed, it would look simply ridiculous if, say, the weight of a supporting column for a building were checked to thousandths or even smaller fractions of a gram! There is also the problem of measuring bulk material, which is associated with random deviations, as happens in large aggregates. Similar phenomena are typical of objects of the microworld and of biological, social, economic and other similar objects. Here one looks for statistical averages and applies methods specially designed for dealing with randomness and its distributions, that is, probabilistic methods and the like.

To eliminate random and systematic measurement errors and to identify errors and mistakes associated with the nature of the instruments and of the observer (the human being), a special mathematical theory of errors has been developed.
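Two standard results of the theory of errors, quoted here only as a sketch of what that theory supplies: averaging repeated measurements reduces the random error of the result, and the error of an indirectly measured quantity can be propagated from the errors of the directly measured ones; systematic errors, by contrast, are not reduced by averaging and must be removed by calibration.

```latex
% Random error of the mean of n repeated measurements with spread \sigma:
\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}.
% Propagation of independent errors to an indirectly measured quantity f(x_1,\dots,x_k):
\sigma_f^{2} = \sum_{i=1}^{k}\left(\frac{\partial f}{\partial x_i}\right)^{2}\sigma_{x_i}^{2}.
```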

In the 20th century, with the development of technology, measurement methods for fast processes, aggressive environments and other conditions in which the presence of an observer is excluded acquired particular importance. Methods of automatic measurement and electrometry, as well as computer processing of information and computer control of measurement processes, came to the rescue here. An outstanding role in their development was played by scientists of the Novosibirsk Institute of Automation and Electrometry of the SB RAS and of NSTU (NETI). These were world-class results.

Measurement, along with observation and comparison, is widely used at the empirical level of cognition and in human activity in general; it forms part of the most developed, complex and significant method - the experiment.

An experiment is understood as a method of studying and transforming objects in which the researcher actively influences them by creating the artificial conditions needed to reveal the properties, characteristics or aspects of interest, deliberately altering the course of natural processes while carrying out regulation, measurement and observation. The main means of creating such conditions are the various instruments and artificial devices to be discussed below. An experiment is the most complex, comprehensive and effective method of empirical knowledge and of the transformation of objects of various kinds. But its essence lies not in its complexity, but in its purposefulness, its deliberateness, and its intervention, through regulation and control, in the processes and states of the objects being studied and transformed.

Galileo is considered the founder of experimental science and the experimental method. Experience as the main path for natural science was first identified at the end of the 16th and beginning of the 17th centuries by the English philosopher Francis Bacon. Experience is the main path for engineering and technology.

A distinctive feature of an experiment is the possibility of studying and transforming an object in a relatively pure form, with all the incidental factors that obscure the essence of the matter eliminated almost entirely. This makes it possible to study objects of reality under extreme conditions, that is, at ultra-low and ultra-high temperatures, pressures and energies, at extreme process rates, electric and magnetic field strengths, interaction energies, and so on.

Under these conditions, it is possible to obtain unexpected and surprising properties from ordinary objects and, thereby, penetrate deeper into their essence and mechanisms of transformation (extreme experiment and analysis).

Examples of phenomena discovered under extreme conditions are superfluidity and superconductivity at low temperatures. The most important advantage of an experiment is its repeatability, when observations, measurements, tests of the properties of objects are carried out repeatedly under varying conditions in order to increase the accuracy, reliability and practical significance of previously obtained results, and to verify the existence of a new phenomenon in general.

An experiment is resorted to in the following situations:

- when one tries to discover previously unknown properties and characteristics of an object - a research experiment;

- when the correctness of certain theoretical propositions, conclusions and hypotheses is being checked - a theory-testing experiment;

- when the correctness of previously performed experiments is being checked - a verification experiment;

- when an experiment is conducted for teaching purposes - an educational and demonstration experiment.

Any of these types of experiment can be carried out either directly with the object under investigation or with its substitute - models of various kinds. Experiments of the first type are called full-scale, those of the second type model (simulation) experiments. Examples of the second type are studies of the hypothetical primordial atmosphere of the Earth on models consisting of a mixture of gases and water vapor. The experiments of Miller and Abelson confirmed the possibility of the formation of organic compounds under electrical discharges in a model of the primordial atmosphere, and this in turn became a test of the theory of Oparin and Haldane on the origin of life. Another example is model experiments on computers, which are becoming ever more widespread in all sciences. In this connection physicists today speak of the emergence of "computational physics" (the operation of a computer is based on mathematical programs and computational operations).

An advantage of model experiments is the ability to study objects over a wider range of conditions than the original allows, which is especially important in medicine, where experiments that harm human health cannot be conducted. In that case one resorts to living and non-living models that reproduce or imitate the characteristics of a person and his organs. Experiments can be conducted both on material (field or informational) objects and on their ideal copies; in the latter case we have a thought experiment, including a computational one, as an ideal form of a real experiment (computer simulation of an experiment).

At present, attention to sociological experiments is increasing. But there are features that limit the possibilities of such experiments in accordance with the laws and principles of humanity, which are reflected in the concepts and agreements of the UN and international law. Thus, no one except criminals will plan experimental wars, epidemics, etc. in order to study their consequences. In this regard, scenarios of a nuclear missile war and its consequences in the form of a “nuclear winter” were played out on computers here and in the USA. The conclusion from this experiment: a nuclear war will inevitably bring the death of all humanity and all life on Earth. The importance of economic experiments is great, but even here the irresponsibility and political bias of politicians can and does lead to catastrophic results.

Observations, measurements and experiments rely chiefly on instruments. What is an instrument with respect to its role in research? In the broad sense of the word, instruments are artificial technical means and devices of various kinds that allow us to study any phenomenon, property, state or characteristic of interest from a quantitative and/or qualitative standpoint, to create strictly defined conditions for its detection, realization and regulation, and to observe and measure at the same time.

It is equally important to choose a reference system, and to reproduce it deliberately in the instrument. By reference systems we understand objects that are mentally accepted as initial, basic and physically at rest, motionless. This is seen most clearly when measurements are made using different reference scales. In astronomical observations these are the Earth, the Sun, other bodies, the (conditionally) fixed stars, and so on. Physicists call "laboratory" the reference system that coincides, in the spatio-temporal sense, with the place of observation and measurement. In the instrument itself the reference system is an important part of the measuring device, conventionally calibrated on a reference scale, on which the observer records, for example, the deviation of a needle or of a light signal from the zero of the scale. In digital measuring systems there is still a reference point known to the observer, based on knowledge of the features of the countable set of measurement units employed. Scales that are simple and easy to understand are found, for example, on rulers, on clocks with a dial, and on most electrical and heat-measuring instruments.

In the classical period of science, the requirements for instruments included, first, sensitivity to the external factor being measured, for measuring and regulating the experimental conditions; and second, so-called "resolution", that is, the limits of accuracy with which the specified conditions of the process under study are maintained in the experimental device.

At the same time it was tacitly assumed that with the progress of science all these characteristics could be improved and increased without limit. In the 20th century, thanks to the development of the physics of the microworld, it was found that there is a lower limit to the divisibility of matter and field (quanta, etc.) and a smallest value of electric charge, and so on. All this forced a revision of the earlier requirements and drew special attention to the systems of physical and other units familiar to everyone from the school physics course.

Another condition of the objectivity of the description of objects was considered to be the fundamental possibility of abstracting from reference systems, either by choosing a so-called "natural reference system" or by discovering properties of objects that do not depend on the choice of reference system; in science such properties are called "invariants". There are not so many invariants in nature itself: the weight of the hydrogen atom (which became a measure, a unit for measuring the weights of other chemical atoms), the electric charge, the so-called "action" in mechanics and physics (whose dimension is energy multiplied by time), Planck's quantum of action (in quantum mechanics), the gravitational constant, the speed of light, and so on. At the turn of the 19th and 20th centuries science discovered seemingly paradoxical things: mass, length and time are relative, depending on the speed of motion of particles of matter and fields and, of course, on the observer's position in the reference system. In the special theory of relativity a special invariant was eventually found - the "four-dimensional interval".
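The invariant mentioned at the end of the paragraph has a simple explicit form: for two events separated by an interval of time Δt and by spatial distances Δx, Δy, Δz, the following quantity is the same in every inertial reference system, even though the separate time and length intervals are not.

```latex
s^{2} = c^{2}\,\Delta t^{2} - \Delta x^{2} - \Delta y^{2} - \Delta z^{2}
```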

The importance of research into reference systems and invariants grew throughout the 20th century, especially in the study of extreme conditions and of the nature and speed of processes, such as ultra-high energies, low and ultra-low temperatures, fast processes, and so on. The problem of measurement accuracy also remains important. All instruments used in science and technology can be divided into observational, measuring and experimental ones. There are several types and subtypes according to their purpose and function in research:

1. Measuring instruments of various kinds, with two subtypes:

a) for direct measurement (rulers, measuring vessels, etc.);

b) for indirect, mediated measurement (for example, pyrometers, which measure body temperature by measuring radiant energy; strain gauges and sensors, which measure pressure through electrical processes in the device itself; and so on).

2. Instruments that strengthen the natural organs of man without changing the essence or nature of the observed and measured characteristics. These include optical instruments (from eyeglasses to the telescope), many acoustic instruments, and so on.

3. Instruments that transform natural processes and phenomena from one kind into another, accessible to the observer and/or to his observing and measuring devices. Such are X-ray machines, scintillation sensors, and so on.

4. Experimental instruments and devices, as well as systems of them, including observation and measuring instruments as integral parts. The range of such devices extends up to giant particle accelerators, such as the one at Serpukhov. In them, processes and objects of various kinds are relatively isolated from the environment, regulated and controlled, and phenomena are singled out in the purest possible form (that is, without other extraneous phenomena and processes, interference, perturbing factors, etc.).

5. Demonstration devices, which serve to show visually various properties, phenomena and regularities of various kinds during teaching. These also include test benches and simulators of various kinds, since they are visual and often imitate particular phenomena as if they were occurring in reality.

Instruments and devices can also be divided into: a) those intended for research purposes (the main concern here) and b) those intended for mass consumer use. Progress in instrument making is a concern not only of scientists but also, and above all, of designers and instrument engineers.

One can also single out model devices, which continue all the preceding types in the form of substitutes, as well as reduced copies and mock-ups of real instruments, devices and natural objects. Examples of models of the first kind are cybernetic and computer simulations of real systems, which make it possible to study and design real objects, often across a wide range of similar systems (in control and communications, in the design of systems, communication links and networks of various kinds, in CAD). Examples of models of the second kind are physical models of a bridge, an airplane, a dam, a beam, a car and its components, or of any device.

In a broad sense, an instrument is not only some artificial construction; it is also the environment in which a process takes place. The role of the latter can also be played by a computer: in that case we say that we are dealing with a computational experiment (one operating with numbers).

Computational experiment as a method has a great future, since often the experimenter deals with multifactorial and collective processes where enormous statistics are needed. The experimenter also deals with aggressive environments and processes that are dangerous to humans and living things in general (in connection with the latter, there are environmental problems of scientific and engineering experiments).
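A toy computational experiment of exactly this "large statistics" kind (everything in it is illustrative): a Monte Carlo estimate of π obtained by repeating a random trial many times inside the computer, with no physical apparatus at all.

```python
import random

def estimate_pi(trials: int) -> float:
    """Monte Carlo 'computational experiment': sample random points in the unit
    square and count how many land inside the inscribed quarter circle."""
    inside = 0
    for _ in range(trials):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / trials

# The larger the statistics, the closer the estimate tends to be to pi.
for n in (1_000, 100_000, 1_000_000):
    print(n, estimate_pi(n))
```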

The development of the physics of the microworld has shown that in our theoretical description of microworld objects we cannot, in principle, eliminate the influence of the instrument on the answer we seek. Moreover, here we cannot, in principle, measure the coordinates and momenta of microparticles simultaneously with arbitrary precision; after measurement one has to construct mutually complementary descriptions of the particle's behavior based on the readings of different instruments and on descriptions of measurement data that cannot be obtained simultaneously (W. Heisenberg's uncertainty principle and N. Bohr's complementarity principle).
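The quantitative content of the uncertainty principle referred to here is usually written as a lower bound on the product of the uncertainties of a coordinate and its conjugate momentum:

```latex
\Delta x \,\Delta p_{x} \;\geq\; \frac{\hbar}{2}
```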

Progress in instrument making often produces a genuine revolution in a given science. Classic examples are the discoveries made thanks to the invention of the microscope, the telescope, the X-ray machine, the spectroscope and the spectrometer, and the creation of satellite laboratories and the carrying of instruments into space on satellites. In many research institutes the costs of instruments and experiments make up the lion's share of the budget. Today there are many examples in which experiments are beyond the means of even large countries, which therefore turn to scientific cooperation (as with CERN in Switzerland, with space programs, etc.).

In the course of the development of science the role of instruments is often distorted and exaggerated. Thus in philosophy, in connection with the peculiarities of experiments in the microworld discussed just above, the idea arose that in this field all our knowledge is entirely of instrumental origin: the instrument, as a continuation of the cognizing subject, interferes with the objective course of events, and hence the conclusion is drawn that all our knowledge of microworld objects is subjective, being of instrumental origin. As a result, a whole philosophical current arose in 20th-century science - instrumental idealism, or operationalism (P. Bridgman). This view was, of course, criticized in response, but a similar idea is still encountered among scientists. To a great extent it arose from an underestimation of theoretical knowledge and cognition and of their capabilities.

All scientific knowledge rests on certain methods of cognizing reality, thanks to which the branches of science obtain the information needed for processing, interpretation, and the construction of theories. Each individual branch has its own specific set of research methods, but on the whole they are the same for all, and it is precisely their application that distinguishes science from pseudoscience.

Empirical research methods, their features and types

Among the most ancient and widely used are the empirical methods. In the ancient world there were already empiricist philosophers who learned about the surrounding world through sensory perception. This is where empirical research methods were born; the term itself literally means "perception by the senses".

Empirical methods are considered the main and most exact methods in psychology. In general, in studying the characteristics of a person's mental development two main approaches can be used: the cross-sectional approach, which includes empirical studies, and the longitudinal one - the so-called long-term study, in which one person is the object of study over a long period of time and the characteristics of his personal development are revealed in this way.

Empirical methods of cognition involve observation of phenomena, their recording and classification, as well as the establishment of relationships and patterns. They consist of various experimental laboratory studies, psychodiagnostic procedures, biographical descriptions and have existed in psychology since the 19th century, ever since it began to stand out as a separate branch of knowledge from other social sciences.

Observation

Observation as a method of empirical research in psychology exists in the form of self-observation (introspection) - subjective knowledge of one's own psyche - and in the form of objective, external observation. Moreover, both proceed indirectly, through the external manifestations of mental processes in various forms of activity and behavior.

Unlike everyday observation, scientific observation must meet certain requirements and follow a well-established procedure. First its tasks and goals are determined, then the object, subject and situation are chosen, along with the methods that will provide the most complete information. In addition, the results of observation are recorded and then interpreted by the researcher.

Various forms of observation are certainly interesting and indispensable, especially when it is necessary to create the most general picture of people’s behavior in natural conditions and situations where the intervention of a psychologist is not required. However, there are also certain difficulties in interpreting phenomena related to the personal characteristics of the observer.

Experiment

In addition, empirical methods such as the laboratory experiment are often used. Their distinctive feature is that they study cause-and-effect relationships in an artificially created environment. Here the experimental psychologist not only models a specific situation but actively influences it, changes it, and varies the conditions. Moreover, the created model can be repeated, so that the results obtained in the experiment can be reproduced many times. Experimental empirical methods make it possible to study internal mental processes through their external manifestations in an artificially created situational model. Science also knows the natural experiment, which is carried out in natural conditions or in those closest to them. Another form of the method is the formative experiment, which is used to shape and change a person's psyche while simultaneously studying it.

Psychodiagnostics

Empirical methods of psychodiagnostics aim to describe and record personality traits and the similarities and differences between people using standardized questionnaires, tests and surveys.

The listed main methods of empirical research in psychology, as a rule, are used comprehensively. Complementing each other, they help to better understand the characteristics of the psyche and discover new aspects of personality.