Testing personality and abilities. Examples of verbal and numerical tests. Functional types of testing

Dear friends!

  • If you will soon take SHL, Talent-Q, or Ontarget Genesys verbal and numerical testing,
  • If you are afraid of failing and are looking for a way to prepare,
  • If there is little time left,

then I hasten to inform you that it is possible to prepare online, professionally.

Quickly and simply, using effective online training, you will sharpen your skills in 2-3 days and pass the tests on the first try! A stable skill appears after solving 30-40 tests.

Listen to the six-minute interview recorded immediately after testing and training in our system.

In the interview, we talked about the online program Roboxtest V.8, a platform for the MAXIMUM 875, BIG4, FMCG, and NGK versions.

Our team has developed a unique computer program, Roboxtest V.8. It is as close as possible to real testing: the process takes place directly in the browser, with a time limit. I invite you to take a trial verbal test and see everything with your own eyes. A complete database of tests (more than 100 tests at the moment, about 1,500 questions) is also available. To get access, contact me; contact details are below.

All preparation takes place online. Each test comes with correct answers and worked solutions, with no time limit. The program works directly in the Google Chrome, Mozilla Firefox, and Safari browsers.

Attention! At the moment, the program is not fully compatible with Internet Explorer (not all functionality works).


At the end you will see a report similar to what employers see, in percentages and percentiles. This will allow you to soberly assess your strengths. Since the preparation takes place online, you can compare yourself with other people who have taken the same test, and this matters, because this is exactly how employers look at you.

The system will also identify your strengths and weaknesses and tell you what to pay more attention to.

The database now contains more than a hundred different tests (more than 1,500 questions), mainly ability tests: verbal, numerical, and abstract-logical. But you will most likely not work through the entire database. Something else matters here: skill.

Experience shows that scoring 80-90 percent and at least the 60th percentile on each type of test is enough to pass real testing on the first try.

People who prepared using our system solved 30-40 tests on average. There are exceptions: one candidate really wanted the position and solved 152 tests! And he passed the real test successfully!

There are also knowledge tests: English (2 levels), RAS, and IFRS, for preparation for the Big Four.

If you are interested in training in our system, please contact me. Without payment, the system will block your account within a few hours of registration.

Sincerely, Panteleev Stanislav.

[email protected]

The tasks you will solve cannot be called difficult. They are not matrices and integrals, nor complex mathematical logic. The purpose of the testing is to measure your psychometric characteristics.

The whole difficulty lies in the time you are given and in the passing scores set by employers and testing companies. You know nothing about them, and you do not know the types of tasks either. In this article we will lift the veil of secrecy and show what awaits you during testing.

An example of a testing company report for an employer

Your results will be compared with those of other candidates. This is what a test result looks like to an employer, using the Talent-Q system as an example.

Draw your own conclusions. You will be compared with a normative group and, based on these results, you either will or will not be invited for an interview.

To get an interview, be sure to train hard! Study the types of tasks that exist, and find your own materials or use ours. The formula here is simple: "Training = Success".

Numerical test and its varieties. Examples and solutions

Example of a problem with a graph

How many thousand more cars were imported in the second quarter of the second year than in the same period of the first year?
Solution:
The graph shows that 600 thousand cars were imported in the second quarter of the second year and 425 thousand in the second quarter of the first year.
We calculate the difference: 600 - 425 = 175 thousand cars.
Answer:
175 thousand cars

Example problem with diagram

It is no secret that the strength of any country's financial system is assessed by the size of its gold and foreign exchange reserves. Naturally, the larger the reserves, the more resilient the economy is to various financial shocks.
The charts below show the change in the size of such reserves (in billions of US dollars) for the world's five largest economies: China, the United States, Japan, the European Union as a whole (EU), and the Russian Federation. The data covers the period 2010-2013.

How many times greater were China's gold and foreign exchange reserves in 2010 than Russia's in 2011?

Solution:

China's gold and foreign exchange reserves in 2010 amounted to $2,000 billion; the Russian Federation's in 2011 amounted to $400 billion. The ratio is 2,000 / 400 = 5.

Answer:

5 times

Example of a problem with a table
At the 2004 Olympic Games, athletes from five countries won the most gold, silver, and bronze medals: the USA, China, Russia, Australia, and Japan. Question: how many gold medals did the Russian team lack to take first place in the team standings by number of gold medals (not counting silver and bronze)?

Comment: places in the overall standings are distributed according to the total number of awards.

Solution:

For Russia to take first place by the number of gold medals, it needed to overtake the United States and collect 36 medals. That is, Russia lacked 36 - 27 = 9 medals.

Answer:

9 gold medals

Example of a problem involving percentages

In January 2012, the price of a men's suit increased by 25%; at a sale in March 2013 it became 16% less than the raised price and currently stands at $336. By what percentage overall did the price of the suit fall or rise over this period?

Solution:

Let us denote by x the initial price.

Then the price in January 2012 was 1.25*x;

Price in March: (1 - 0.16) * 1.25 * x = $336

1.05 * x = $336, hence x = $320

Answer:

The price rose by 5% overall.
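The arithmetic above can be double-checked with a few lines of Python (a sketch; all numbers come from the problem statement):

```python
# Check the suit-price problem: +25%, then -16%, final price $336.
final_price = 336.0
combined_factor = 1.25 * (1 - 0.16)            # 1.25 * 0.84 = 1.05
initial_price = final_price / combined_factor  # x = 336 / 1.05 = 320
overall_change_pct = (final_price - initial_price) / initial_price * 100

print(round(initial_price, 2))       # 320.0
print(round(overall_change_pct, 2))  # 5.0
```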

Example of a mixture problem

From two salt solutions, 10 percent and 15 percent, you need to make 40 grams of a 12 percent solution. How many grams of each solution should you take?

Solution:

Let us denote by x the mass of the 10% solution, and by y the mass of the 15% solution.

Then we can create 2 equations:

The total mass of the solution is 40 grams, that is:

x + y = 40

The salt content gives the second equation:

0.1x + 0.15y = 0.12 * 40

So we have a system of two equations. We express x from the first equation (x = 40 - y) and substitute it into the second:

0.1 * (40 - y) + 0.15y = 4.8

4 - 0.1y + 0.15y = 4.8

0.05y = 0.8, hence y = 16 and x = 40 - 16 = 24

Answer:

24 grams of the 10 percent solution and 16 grams of the 15 percent solution.
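For readers who prefer code, the same system can be solved directly; this is a sketch that uses only the two equations from the problem:

```python
# Mixture problem: x g of 10% solution + y g of 15% solution = 40 g of 12%.
#   x + y = 40                      (total mass)
#   0.10*x + 0.15*y = 0.12 * 40    (salt content)
target_mass, target_conc = 40, 0.12
c_weak, c_strong = 0.10, 0.15

y = (target_conc - c_weak) * target_mass / (c_strong - c_weak)  # 15% solution
x = target_mass - y                                             # 10% solution

print(round(x), round(y))  # 24 16
```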

Verbal logic test. Example and solution.

Example of a verbal logic task

There is an international classification of diseases (ICD-10) accepted throughout the world; it includes hundreds of different diseases. Many psychiatrists from different countries (for example, the American doctor Kimberly Young) demand that cyber addiction (computer addiction) be included as a disease in the next edition of the ICD. At the moment, the closest existing diagnosis is gambling addiction, but the description of that disease refers exclusively to slot machines; personal computers are not mentioned.

Question 1: Cyber addiction is a disease recognized throughout the world.

Answer: False.

Explanation: since psychiatrists from different countries are still demanding the inclusion of cyber addiction in the next edition of the ICD, we can conclude that this disease is not yet recognized throughout the world.

The story of Stanislav Panteleev. Tests at P&G

I will tell you about my experience, and you can draw your own conclusions. In 2008, I graduated from Ural State University with a degree in economics and management, specializing in anti-crisis management. In our final years, the Big Four (E&Y, KPMG, Deloitte, PwC) advertised heavily to us. Many from my course went to work there; 90% left within the first year. I chose a different path for myself: sales.

The first company I applied to was P&G. I filled out the form in the Taleo system, uploaded my resume, waited for the call, and soon found myself taking a test at the P&G branch in Yekaterinburg. My first impression was that the tasks were easy, but time flowed inexorably. There were three of us candidates for the Sales position at P&G. I worked through everything carefully and got stuck on some tasks. I remember a problem about how many objects of various sizes would fit into a warehouse; I sat on it for about 10 minutes and realized I could not solve it. At one point my rivals asked me, "Will you have time?" I said I would, but for the rest of the time I simply guessed at the answers. The results came in 20 minutes: "Stanislav - No." I was very upset. I had never had problems with such simple tasks, and here I was failing and ruining my career.

A few days later I pulled myself together and had a simple thought: I would find textbooks, download tests, and start preparing. But no such luck. There were virtually no online tutorials on how to prepare for such seemingly simple tasks, and hardly any tests either. The resources for preparation were scarce, and my career meant a lot to me at the time: money, professional growth, and social standing. There was a resource from Vadim Tikhonov, but I did not want to pay for tests at that moment; it seemed to me that everything could be downloaded.

In the end I spent a lot of time and began composing my own tasks from what I remembered and what I came across. I started asking friends and acquaintances who had faced the same problem. This is how I met Marina Tarasova, who helped me greatly in my preparation. At that time she already had extensive experience in developing tests for personnel assessment and qualification, including training tests for admission to international companies.

Next came Mars, KPMG, E&Y, and Unilever. Everywhere I passed these tests with flying colors! I only needed to master the principle. The training helped me, and it will help you too. Our tests are paid because we put a lot of work into creating them; we work for the result. You have probably noticed that there is very little information on preparing for such testing. We are filling this gap. A lot of new material comes from you, dear clients and readers. Every month we update the tests in line with new information and trends in the candidate-testing market: new tasks, new types of tasks, examples of solutions, and other updates. As a result, we have created a small but very useful resource for your test preparation. We are ready to hear your wishes, comments, and reviews on our website. To leave them, contact a "consultant" and we will get back to you.

We will tell you what SHL tests are and show, with examples, how they help in HR work. Here are examples of all types of SHL tests, with answers.


What are SHL tests

Psychometric SHL tests are a recruiting tool that allows you to weed out unsuitable candidates before the interview. SHL tests do not test applicants' knowledge; rather, they assess their intellectual abilities. According to statistics, after passing the SHL tests, 70-80% of applicants reach an interview.

3 types of SHL tests

1. Verbal SHL tests

The verbal test is a text fragment on a certain topic, usually related to the future activities of the applicant. The text may contain complex structures, terms and special expressions.

Each text is accompanied by 2-3 statements. You need to rate each on the following scale: "True", "False", or "Not enough information".

2. Mathematical (numerical) SHL tests

Such tests involve solving mathematical problems of varying complexity. They do not feature integrals, derivatives, or systems of equations; nevertheless, the tasks require analyzing a large amount of data under time pressure.

3. Logical SHL tests

They are also called abstract reasoning tests, induction tests, or diagram tests. Logical test tasks are given in the form of statements, a set of abstract figures, numerical sequences or diagrams. The candidate must find a pattern and answer the question or choose the correct option.

What are SHL tests for?

SHL tests test how quickly a candidate can think, analyze information, concentrate, and whether he can think logically.

What can be checked using SHL tests

Using SHL tests, you can assess the level of development of various types of abilities: the ability to abstract thinking, processing numerical and verbal information, understanding the principles of mechanics and a number of others.

  1. Verbal SHL tests

These tests reveal how quickly the candidate perceives text, understands logical connections, and evaluates the proposed statements. There is a "catch" in these tests: the answer "Not enough information" is often confused with the answer "False". Only a truly competent specialist can assess the difference.

  2. Numerical SHL tests

With their help, recruiters test the candidate's ability to "see" numbers: quickly work with fractions, find unknowns, or determine percentages. Numerical tests also measure a candidate's ability to understand graphical and tabular information.

  3. Logical SHL tests

They allow HR to determine the candidate's ability to take in unfamiliar information and make the right decision. Applicants who successfully pass the logic test usually have good analytical and abstract thinking and a heightened interest in learning.

  1. Use varied tests in one set; this will let you evaluate different types of thinking.
  2. Pay attention to whether the applicant notices fine print in the task; use it to reverse the entire course of the solution.
  3. Assess the candidate's ability to absorb a large amount of information; deliberately overload the text with data that is important to remember.
  4. Analyze how quickly fatigue and stupor set in during a candidate's work; to do this, increase the number of tasks.
  5. Can the test taker competently filter out unnecessary data that interferes with finding a solution? Determine this by adding plenty of extra data to the original problem statement.
  6. Use problems with complex logical connections, where the correct answer requires building a long logical chain.
  7. Determine how well the candidate knows specialized vocabulary; design tests so that many tasks are incomprehensible without knowledge of the terminology.

Examples of SHL tests

Example of verbal SHL test

Initial condition:

During the summer, when full-time employees go on vacation, some organizations take on students for temporary work. At the same time of year, the workload in many companies increases and the need for additional personnel arises. Temporary employment attracts students with the opportunity to acquire practical skills and to get a job at the company after finishing their studies. The company is also interested in the influx of new labor: it tries to interest students and motivate them to continue the collaboration. Students are not entitled to sick leave or paid vacation, but their work is paid in full.

Statement 1: Students hired for temporary work receive vacation pay in the form of additional payments to their salary.

Correct answer: False.

Statement 2: The work of staff members on leave may be performed by students.

Correct answer: True.

Statement 3: The grievance and disciplinary process applies to students in the same way as to staff members.

Correct answer: Not enough information.

Example of a numerical SHL test

Task: “Working together, Tom, Harry, and Dick can paint a 100-meter fence in 9 hours. Alone, Tom would paint the fence in 18 hours, and Harry in 36. How long will it take Dick to paint the fence if Tom and Harry take the day off?”

Correct answer: 36 hours.
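The answer can be verified by working in rates of "fences per hour"; below is a short sketch using exact fractions:

```python
from fractions import Fraction

# Rates in "fences per hour": all three together finish in 9 h,
# Tom alone in 18 h, Harry alone in 36 h.
all_three = Fraction(1, 9)
tom = Fraction(1, 18)
harry = Fraction(1, 36)

dick = all_three - tom - harry  # 4/36 - 2/36 - 1/36 = 1/36 fence per hour
print(1 / dick)                 # 36 (hours for Dick alone)
```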

Logical SHL Test Example

The candidate is offered a sequence of drawings in which one is missing. You need to choose the missing one from the options below:

Correct answer: second drawing

There are many possible combinations, and it can be difficult to spot the pattern, especially in a short time. High-class specialists often solve such tasks intuitively, which helps you quickly identify the candidate you need.

How to analyze SHL test results

When analyzing the results of SHL tests, it is important to rely on the quality of the answers rather than their quantity. For example, the first applicant filled in the answer columns for all 50 questions, but only 25 were correct; the second answered only 25 questions out of 50 but got all of them right. The second candidate's result is preferable and more valuable, since it rules out accidentally guessing the correct answer by simply ticking boxes.
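The quality-over-quantity point can be made concrete with a tiny calculation (a sketch; the numbers are from the example above):

```python
# Accuracy of attempted answers, not the raw count of marks on the sheet,
# is what distinguishes the two candidates.
def accuracy(correct, attempted):
    return correct / attempted if attempted else 0.0

first = accuracy(correct=25, attempted=50)   # answered all 50, half right
second = accuracy(correct=25, attempted=25)  # answered 25, all right

print(first, second)  # 0.5 1.0
```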

A question left unanswered in any test block should be regarded as incorrect. Analyze tests of future employees in terms of strengths and weaknesses. To do this, an HR specialist needs to clearly know what qualities a candidate for a particular position should have.

Verbal tests make it possible to assess the speed of assimilating textual information and the ability to think logically. When evaluating a verbal test, in addition to correct answers, it is worth assessing the candidate's command of speed-reading techniques and giving an extra plus for it.

When conducting abstract logical tests, it is necessary to analyze the ability of applicants to make logical conclusions based on non-verbal information, usually presented in the form of abstract symbols.

Gennadii_M March 17, 2016 at 2:52 pm

Testing. Fundamental theory

  • IT systems testing
  • Tutorial

I recently had an interview for a Middle QA position on a project that clearly exceeded my capabilities. I spent a lot of time on things I did not know at all and little time reviewing simple theory - in vain.

Below are the basics to review before an interview for a Trainee or Junior position: the definition of testing, quality, verification/validation, goals, stages, test plan, test plan points, test design, test design techniques, traceability matrix, test case, checklist, defect, error/defect/failure, bug report, severity vs priority, testing levels, types of testing, integration testing approaches, testing principles, static and dynamic testing, exploratory/ad-hoc testing, requirements, bug life cycle, software development stages, decision table, QA/QC/test engineer, connection diagram.

All comments, corrections and additions are very welcome.

Software testing is checking the correspondence between the actual and expected behavior of a program, carried out on a finite set of tests selected in a certain way. In a broader sense, testing is one of the quality control techniques, encompassing work planning (Test Management), test design (Test Design), test execution (Test Execution), and analysis of the results (Test Analysis).

Software Quality is a set of characteristics of software related to its ability to satisfy stated and anticipated needs.

Verification is the process of evaluating a system or its components to determine whether the results of the current development stage satisfy the conditions formed at the beginning of that stage, i.e. whether the goals, deadlines, and development tasks defined at the beginning of the current phase are being met.
Validation is determining whether the software being developed meets the user's expectations and needs, as well as the system requirements.
Another interpretation is also common:
assessing a product's compliance with explicit requirements (specifications) is verification, while assessing its compliance with user expectations and requirements is validation. These concepts are often summarized as:
Validation - ‘are we building the right system?’
Verification - ‘are we building the system right, according to the specification?’

Test Goals
To increase the likelihood that the application under test will work correctly under all circumstances.
To increase the likelihood that the application under test will meet all the described requirements.
To provide up-to-date information about the current state of the product.

Testing stages:
1. Product analysis
2. Working with requirements
3. Development of a testing strategy
and planning quality control procedures
4. Creation of test documentation
5. Prototype testing
6. Basic testing
7. Stabilization
8. Operation

Test Plan is a document that describes the entire scope of testing work: from a description of the object, the strategy, the schedule, and the criteria for starting and ending testing, to the equipment and special knowledge required in the process, as well as a risk assessment with options for resolving the risks.
It answers the questions:
What should be tested?
What will you test?
How will you test?
When will you test?
What are the criteria for starting testing?
What are the criteria for ending testing?

Main points of the test plan
The IEEE 829 standard lists the points that a test plan should (may) consist of:
a) Test plan identifier;
b) Introduction;
c) Test items;
d) Features to be tested;
e) Features not to be tested;
f) Approach;
g) Item pass/fail criteria;
h) Suspension criteria and resumption requirements;
i) Test deliverables;
j) Testing tasks;
k) Environmental needs;
l) Responsibilities;
m) Staffing and training needs;
n) Schedule;
o) Risks and contingencies;
p) Approvals.

Test design is the stage of the software testing process at which test scenarios (test cases) are designed and created in accordance with previously defined quality criteria and testing goals.
Roles responsible for test design:
Test analyst - determines “WHAT to test?”
Test designer - determines “HOW to test?”

Test design techniques

Equivalence Partitioning (EP). As an example, if you have a range of valid values from 1 to 10, you choose one correct value inside the interval, say 5, and one incorrect value outside the interval, say 0.

Boundary Value Analysis (BVA). Taking the example above, we select the minimum and maximum limits (1 and 10) as values for positive testing, and the values just outside the limits (0 and 11) for negative testing. Boundary value analysis can be applied to fields, records, files, or any kind of constrained entity.
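Both techniques can be illustrated for the 1..10 range discussed above. This is a sketch; `is_valid` is a hypothetical validator standing in for a real system under test:

```python
# Hypothetical validator under test: accepts integers in the range 1..10.
def is_valid(value):
    return 1 <= value <= 10

# Equivalence partitioning: one representative per class (inside / outside).
ep_cases = {5: True, 0: False, 11: False}

# Boundary value analysis: the limits themselves and the values just outside.
bva_cases = {1: True, 10: True, 0: False, 11: False}

for value, expected in {**ep_cases, **bva_cases}.items():
    assert is_valid(value) == expected, f"unexpected result for {value}"
print("all EP/BVA cases passed")
```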

Cause/Effect (CE). As a rule, this is entering combinations of conditions (causes) to obtain a response from the system (effect). For example, you are testing the ability to add a customer on a certain screen. To do this, you enter several fields, such as “Name”, “Address”, and “Phone Number”, and then press the “Add” button: this is the “cause”. After the “Add” button is pressed, the system adds the client to the database and shows his number on the screen: this is the “effect”.

Error Guessing (EG). The tester uses his knowledge of the system and his ability to interpret the specification to “predict” under what input conditions the system might throw an error. For example, the specification says: “the user must enter a code”. The tester will think: “What if I don't enter a code?”, “What if I enter the wrong code?”, and so on. This is error guessing.

Exhaustive Testing (ET) is an extreme case. Within this technique, you test all possible combinations of input values; in principle, this should find all problems. In practice the method is infeasible because of the enormous number of input values.

Pairwise Testing is a technique for generating test data sets. The essence can be formulated like this: build data sets in which each tested value of each parameter is combined at least once with each tested value of every other parameter.

Let's say some value (a tax) for a person is calculated based on gender, age, and the presence of children: we get three input parameters, for each of which we select test values in some way. For example: gender - man or woman; age - under 25, 25 to 60, over 60; children - yes or no. To check the correctness of the calculations, we could, of course, go through all combinations of all parameter values:

     gender   age       children
 1   man      under 25  no children
 2   woman    under 25  no children
 3   man      25-60     no children
 4   woman    25-60     no children
 5   man      over 60   no children
 6   woman    over 60   no children
 7   man      under 25  has children
 8   woman    under 25  has children
 9   man      25-60     has children
10   woman    25-60     has children
11   man      over 60   has children
12   woman    over 60   has children

Or we might decide that we do not need combinations of all values with all values, and just want to make sure we check every unique pair of parameter values. For example, for the gender and age parameters, we want to be sure we check a man under 25, a man between 25 and 60, a man over 60, as well as a woman under 25, a woman between 25 and 60, and a woman over 60. And likewise for every other pair of parameters. This way we can get much smaller sets of values (they contain all pairs of values, though some appear twice):

    gender   age       children
1   man      under 25  no children
2   woman    under 25  has children
3   man      25-60     has children
4   woman    25-60     no children
5   man      over 60   no children
6   woman    over 60   has children

This is roughly the essence of the pairwise testing technique: we do not test all combinations of all values, but we do test all pairs of values.
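A short script can confirm that the reduced six-row set really covers every pair of parameter values (a sketch; the value labels follow the tables above):

```python
from itertools import combinations

# The possible values of each parameter, and the six reduced test rows.
params = [("man", "woman"),
          ("under 25", "25-60", "over 60"),
          ("no children", "has children")]

rows = [("man",   "under 25", "no children"),
        ("woman", "under 25", "has children"),
        ("man",   "25-60",    "has children"),
        ("woman", "25-60",    "no children"),
        ("man",   "over 60",  "no children"),
        ("woman", "over 60",  "has children")]

def uncovered_pairs(rows, params):
    """Return value pairs (from different parameters) not hit by any row."""
    missing = set()
    for i, j in combinations(range(len(params)), 2):
        for a in params[i]:
            for b in params[j]:
                if not any(r[i] == a and r[j] == b for r in rows):
                    missing.add((a, b))
    return missing

print(uncovered_pairs(rows, params))  # set() -> every pair is covered
```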

Traceability matrix (requirements traceability matrix) is a two-dimensional table containing the correspondence between the functional requirements of the product and the prepared test cases. The column headings contain requirements, and the row headings contain test scenarios. A mark at an intersection indicates that the requirement of that column is covered by the test case of that row.
QA engineers use the requirements traceability matrix to validate test coverage of the product. The matrix is an integral part of the test plan.

Test Case is an artifact that describes the set of steps, specific conditions, and parameters necessary to verify the implementation of the function under test or a part of it.
Example:

    Action                  Expected Result         Test Result (passed/failed/blocked)
    Open the “login” page   Login page is opened    Passed

Each test case must have 3 parts:
PreConditions: a list of actions that bring the system to a state suitable for the main test, or a list of conditions whose fulfillment shows that the system is in such a state.
Test Case Description: a list of actions that move the system from one state to another, producing a result from which one can conclude whether the implementation satisfies the requirements.
PostConditions: a list of actions that return the system to its initial state (the state before the test).
Types of test scenarios:
Test cases are divided by expected result into positive and negative:
A positive test case uses only correct data and verifies that the application executes the called function correctly.
A negative test case operates with both correct and incorrect data (at least one incorrect parameter) and aims to check exceptional situations (that validators fire), and also to verify that the called function is not executed when a validator fires.
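The two kinds of test case can be sketched in a few lines; `register` is a hypothetical function with a simple input validator, invented purely for illustration:

```python
# Hypothetical function under test: registers a user, rejecting bad input.
def register(name, age):
    if not name:
        raise ValueError("name is required")
    if not 0 < age < 150:
        raise ValueError("age out of range")
    return f"registered {name}"

# Positive case: only correct data; the called function must succeed.
assert register("Alice", 30) == "registered Alice"

# Negative case: one invalid parameter; the validator must fire
# and the main action must not be performed.
try:
    register("Alice", -5)
    raise AssertionError("validator did not fire")
except ValueError:
    pass
print("both cases behaved as expected")
```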

Check list is a document that describes what should be tested. A checklist can have very different levels of detail; how detailed it is depends on reporting requirements, the employees' level of product knowledge, and the complexity of the product.
As a rule, a checklist contains only actions (steps), without the expected result. A checklist is less formal than a test script; it is appropriate to use one when test scripts would be redundant. Checklists are also associated with agile approaches to testing.

Defect (aka bug) is a discrepancy between the actual result of program execution and the expected result. Defects are discovered during the software testing stage, when the tester compares the results of the program (component or design) with the expected result described in the requirements specification.

Error is a user mistake: the user tries to use the program in an unintended way.
Example: entering letters into fields where numbers are expected (age, quantity of goods, etc.).
A well-made program anticipates such situations and displays an error message.
Bug (defect) is a mistake by the programmer (or designer, or anyone else taking part in development): something in the program does not go as planned and the program gets out of control. For example, user input is not validated, so incorrect data causes crashes or other "joys" in the program's operation. Or the program is built internally in a way that does not match what is expected of it.
Failure is a breakdown (not necessarily a hardware one) in the operation of a component, an entire program, or a system. That is, there are defects that lead to failures (a defect caused the failure) and there are those that do not (UI defects, for example). But a hardware failure that has nothing to do with software is also a failure.

Bug Report is a document describing a situation or sequence of actions that led to incorrect operation of the test object, indicating the causes and the expected result.
Header
Summary A short description of the problem, clearly indicating the cause and type of the error situation
Project The name of the project being tested
Component The name of the part or function of the product being tested
Version The version in which the error was found
Severity The most common five-level system for grading the severity of a defect is:
S1 Blocker
S2 Critical
S3 Major
S4 Minor
S5 Trivial
Priority The priority of the defect:
P1 High
P2 Medium
P3 Low
Status The status of the bug. Depends on the procedure used and the bug workflow and life cycle

Author (Author) Bug report creator
Assigned To The name of the person assigned to the problem.
Environment
OS / Service Pack, etc. / Browser + version /… Information about the environment in which the bug was found: operating system, service pack, for WEB testing - browser name and version, etc.

Description
Steps to Reproduce Steps by which you can easily reproduce the situation that led to the error.
Actual Result The result obtained after going through the steps to reproduce
Expected Result Expected correct result
Add-ons
Attachment A log file, screenshot or any other document that can help clarify the cause of the error or indicate a way to solve the problem

Severity vs Priority
Severity is an attribute that characterizes the impact of a defect on the operability of the application.
Priority is an attribute that indicates the urgency of performing a task or fixing a defect; essentially, it is a work-planning tool for the manager. The higher the priority, the sooner the defect needs to be fixed.
Severity is set by the tester.
Priority is set by the manager, team lead, or customer.

Gradation of Defect Severity (Severity)

S1 Blocker
A blocking error that renders the application inoperative, making further work with the system under test or its key functions impossible. Solving the problem is necessary for the further functioning of the system.

S2 Critical
A critical error: key business logic is malfunctioning, there is a hole in the security system, or a problem has caused a temporary server crash or rendered part of the system inoperative, with no workaround through other entry points. Fixing the problem is necessary for further work with the key functions of the system under test.

S3 Major
A significant error: part of the main business logic does not work correctly. The error is not critical, or it is possible to work with the function under test through other entry points.

S4 Minor
A minor error that does not violate the business logic of the part of the application being tested, an obvious user interface problem.

S5 Trivial
A trivial error that does not affect the business logic of the application, a poorly reproducible problem that is hardly noticeable through the user interface, a problem with third-party libraries or services, a problem that does not have any impact on the overall quality of the product.

Gradation of Defect Priority (Priority)
P1 High
The error must be corrected as quickly as possible, because its presence is critical for the project.
P2 Medium
The error must be corrected; its presence is not critical, but requires a mandatory solution.
P3 Low
The error must be corrected; its presence is not critical and does not require an urgent solution.
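The two scales can be captured as enumerations, which keeps severity (set by the tester) and priority (set by the manager) clearly separated. A minimal Python sketch:

```python
from enum import Enum

class Severity(Enum):
    """Defect severity: impact on the application (set by the tester)."""
    S1 = "Blocker"
    S2 = "Critical"
    S3 = "Major"
    S4 = "Minor"
    S5 = "Trivial"

class Priority(Enum):
    """Defect priority: how soon to fix (set by the manager or customer)."""
    P1 = "High"
    P2 = "Medium"
    P3 = "Low"

# The two attributes are independent: a typo in the company name on the
# start page is trivial in severity, yet may get the highest priority.
typo_defect = (Severity.S5, Priority.P1)
print(typo_defect[0].value, "/", typo_defect[1].value)
```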

Testing Levels

1. Unit Testing
Component (unit) testing checks functionality and looks for defects in parts of the application that are accessible and can be tested separately (program modules, objects, classes, functions, etc.).
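As an illustration, here is a unit test for a single function using Python's unittest; the function under test is invented for the example:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Unit under test: price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Component test: the function is exercised in isolation."""

    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False so the runner does not terminate the interpreter
    unittest.main(argv=["unit"], exit=False, verbosity=2)
```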

2. Integration Testing
The interaction between system components is checked after component testing.
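A sketch of the idea in Python: two components that could each pass unit testing separately are now exercised together, because the defect being hunted lives in their interaction (all names are invented):

```python
class InMemoryUserStore:
    """Low-level component: stores user names by id."""
    def __init__(self):
        self._users = {}
    def save(self, user_id, name):
        self._users[user_id] = name
    def load(self, user_id):
        return self._users.get(user_id)

class GreetingService:
    """High-level component that depends on the store."""
    def __init__(self, store):
        self.store = store          # the integration point under test
    def greet(self, user_id):
        name = self.store.load(user_id)
        return f"Hello, {name}!" if name else "Hello, guest!"

# Integration test: real components wired together, no mocks.
store = InMemoryUserStore()
service = GreetingService(store)
store.save(1, "Alice")
print(service.greet(1))   # known user
print(service.greet(2))   # missing user must not crash the service
```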

3. System Testing
The main objective of system testing is to verify both functional and non-functional requirements in the system as a whole. This identifies defects such as incorrect use of system resources, unintended combinations of user-level data, incompatibility with the environment, unintended use cases, missing or incorrect functionality, inconvenience of use, etc.

4. Operational Testing (Release Testing)
Even if a system meets all requirements, it is important to ensure that it meets the needs of the user and fulfills its role in its operating environment as defined in the system's business model. It should be taken into account that the business model may contain errors. This is why it is so important to conduct operational testing as the final validation step. In addition, testing in the operating environment allows us to identify non-functional problems, such as: conflicts with other systems related to the business area or in software and electronic environments; insufficient system performance in the operating environment, etc. Obviously, finding such things at the implementation stage is a critical and expensive problem. That is why it is so important to carry out not only verification, but also validation, from the earliest stages of software development.

5. Acceptance Testing
A formal testing process that verifies that a system meets requirements, conducted in order to:
determine whether the system meets the acceptance criteria;
allow the customer or another authorized person to decide whether to accept the application.

Types of testing

Functional types of testing

Functional testing
GUI Testing
Security and Access Control Testing
Interoperability Testing

Non-functional types of testing

All types of performance testing:
- Load Testing (Performance and Load Testing)
- Stress Testing
- Stability / Reliability Testing
- Volume Testing
Installation Testing
Usability Testing
Failover and Recovery Testing
Configuration Testing

Change-Related Types of Testing

Smoke Testing
Regression Testing
Re-testing
Build Verification Test
Sanity Testing

Functional testing considers pre-specified behavior and is based on an analysis of the specifications of the functionality of the component or the system as a whole.

GUI Testing is a functional check of the interface for compliance with the requirements: size, font, color, consistent behavior.

Security testing is a testing strategy used to check the security of the system and to analyze the risks involved in protecting the application, as a whole, against hacker attacks, viruses, and unauthorized access to confidential data.

Interoperability Testing is functional testing that checks the ability of an application to interact with one or more components or systems; it includes compatibility testing and integration testing.

Load Testing is automated testing that simulates the work of a certain number of business users on some common (shared) resource. (Note: stress testing, defined below, is a separate type.)

Stress Testing checks how efficient the application and the system as a whole are under stress, and also evaluates the system's ability to recover, i.e. to return to normal after the stress ceases. Stress in this context can be an increase in the intensity of operations to very high values or an emergency change in the server configuration. One of the tasks of stress testing may also be to assess performance degradation, so the goals of stress testing may overlap with those of performance testing.

Volume Testing. The purpose of volume testing is to assess performance as the volume of data in the application database increases.

Stability / Reliability Testing. The task of stability (reliability) testing is to check the functionality of the application during long-term (many hours) testing with an average load level.

Installation testing is aimed at verifying successful installation and configuration of the software, as well as its updating or uninstallation.

Usability testing is a testing method aimed at establishing the degree of usability, learnability, understandability and attractiveness for users of the product being developed in the context of given conditions. This also includes:
User eXperience (UX) is what the user feels while using a digital product, while the user interface (UI) is the tool that enables the interaction between the user and the web resource.

Failover and Recovery Testing tests the product under test in terms of its ability to withstand and successfully recover from possible failures resulting from software errors, hardware failures, or communications problems (for example, network failure). The purpose of this type of testing is to test recovery systems (or systems duplicating the main functionality), which, in the event of failures, will ensure the safety and integrity of the data of the product being tested.

Configuration Testing is a special type of testing aimed at checking the operation of the software under different system configurations (declared platforms, supported drivers, different computer configurations, etc.).

Smoke testing is a short cycle of tests performed to confirm that, after the code is built (new or fixed), the installed application starts and performs its basic functions.
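A smoke suite can be sketched as a handful of cheap checks whose only job is to decide whether the build is worth deeper testing; here Calculator is an invented stand-in for the real application:

```python
class Calculator:
    """Stand-in for the application under test."""
    def add(self, a, b):
        return a + b
    def divide(self, a, b):
        return a / b

def run_smoke_tests() -> bool:
    """Short cycle of basic checks: does the build start, and do its
    core functions respond at all? Depth is deliberately minimal."""
    app = Calculator()                       # "the application starts"
    checks = {
        "app starts": app is not None,
        "addition works": app.add(2, 2) == 4,
        "division works": app.divide(10, 2) == 5,
    }
    passed = all(checks.values())
    print("SMOKE PASSED" if passed else "SMOKE FAILED")
    return passed

# A failing smoke run would block the full regression suite.
run_smoke_tests()
```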

Regression testing is a type of testing aimed at verifying changes made to an application or its environment (fixing a defect, merging code, migrating to another operating system, database, web server, or application server) in order to confirm that pre-existing functionality works as it did before. Regression tests can be either functional or non-functional.

Re-testing is testing during which the test scripts that identified errors during the last run are executed again to confirm that those errors have been fixed.
What is the difference between regression testing and re-testing?
Re-testing - bug fixes are checked
Regression testing - checks that bug fixes, as well as any changes in the application code, do not affect other software modules and do not cause new bugs.

Build testing, or Build Verification Testing, is testing aimed at determining whether the released build meets the quality criteria required to begin testing. In terms of its goals it is analogous to Smoke Testing, being aimed at accepting a new version for further testing or operation. Depending on the quality requirements for the released version, it can go deeper.

Sanity testing is narrowly focused testing sufficient to prove that a specific function works according to the requirements stated in the specification. It is a subset of regression testing and is used to determine whether a certain part of the application is still working after changes made to it or its environment. It is usually done manually.

Integration testing approaches:
Bottom Up Integration
All low-level modules, procedures, or functions are assembled together and then tested, after which the next level of modules is assembled for integration testing. This approach is considered useful when all or almost all modules of the level being developed are ready. It also helps determine the readiness of the application from the testing results.
Top Down Integration
First, all high-level modules are tested, and gradually low-level ones are added one by one. All lower-level modules are simulated as stubs with similar functionality, then when ready, they are replaced with real active components. This way we test from top to bottom.
Big Bang (“Big Bang” Integration)
All or almost all of the developed modules are assembled together as a complete system or its main part, and then integration testing is carried out. This approach is very good for saving time. However, if the test cases and their results are not recorded correctly, then the integration process itself will be greatly complicated, which will become an obstacle for the testing team in achieving the main goal of integration testing.
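The top-down approach described above can be sketched in Python: the high-level module is tested first against a stub with the same interface, and the stub is later replaced by the real component (all names are illustrative):

```python
class PaymentGatewayStub:
    """Stub simulating the not-yet-ready low-level module."""
    def charge(self, amount):
        return {"status": "ok", "amount": amount}   # canned answer

class RealPaymentGateway:
    """The real low-level component that replaces the stub when ready."""
    def charge(self, amount):
        if amount <= 0:
            return {"status": "error", "amount": amount}
        return {"status": "ok", "amount": amount}

class OrderProcessor:
    """High-level module tested first in top-down integration."""
    def __init__(self, gateway):
        self.gateway = gateway
    def checkout(self, amount):
        result = self.gateway.charge(amount)
        return "paid" if result["status"] == "ok" else "failed"

# Step 1: integration-test the top level against the stub.
print(OrderProcessor(PaymentGatewayStub()).checkout(100))
# Step 2: swap in the real component and re-run the same tests.
print(OrderProcessor(RealPaymentGateway()).checkout(-5))
```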

Testing principles

Principle 1 – Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that they are not present. Testing reduces the likelihood of defects in the software, but even if no defects are found, this does not prove its correctness.

Principle 2 – Exhaustive testing is impossible
Complete testing using all combinations of inputs and preconditions is physically infeasible except in trivial cases. Instead of exhaustive testing, risk analysis and prioritization should be used to better focus testing efforts.

Principle 3 – Early testing
To find defects as early as possible, testing activities should be started as early as possible in the software or system development life cycle, and should be focused on specific goals.

Principle 4 – Defect clustering
Testing efforts should be concentrated in proportion to the expected, and later the actual, module defect density. As a rule, most of the defects discovered during testing or that caused the majority of system failures are contained in a small number of modules.

Principle 5 – Pesticide paradox
If the same tests are run over and over again, eventually this set of test cases will no longer find new defects. To overcome this "pesticide paradox", test cases must be regularly reviewed and adjusted, and new tests must be written to cover all components of the software or system and find as many defects as possible.

Principle 6 – Testing is context dependent
Testing is done differently depending on the context. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects will not help if the created system does not suit the user and does not meet his expectations and needs.

Static and dynamic testing
Static testing differs from dynamic testing in that it is performed without running the product code. Testing is carried out by analyzing the program code (code review) or compiled code. The analysis can be done either manually or using special tools. The purpose of the analysis is to early identify errors and potential problems in the product. Static testing also includes testing specifications and other documentation.

Exploratory/ad-hoc testing
The simplest definition of exploratory testing is designing and running tests at the same time, which is the opposite of the scenario approach (with its predefined testing procedures, whether manual or automated). Exploratory tests, unlike scenario tests, are not predetermined and are not executed exactly as planned.

The difference between ad hoc and exploratory testing is that, in theory, ad hoc testing can be carried out by anyone, while exploratory testing requires skill and knowledge of specific techniques. Note that some of these techniques are not purely testing techniques.

A requirement is a specification (description) of what should be implemented.
Requirements describe what needs to be implemented without detailing the technical side of the solution: what, not how.

Requirements for requirements:
Correctness
Unambiguity
Completeness of the set of requirements
Consistency of a set of requirements
Verifiability (testability)
Traceability
Understandability

Bug life cycle

Software development stages are the stages that software development teams go through before the program becomes available to a wide range of users. Software development begins with the initial development stage (the pre-alpha stage) and continues with stages in which the product is refined and upgraded. The final stage of this process is the release of the final version of the software to the market (the "generally available" release).

The software product goes through the following stages:
analysis of project requirements;
design;
implementation;
product testing;
deployment and support.

Each stage of software development is assigned a specific serial number. Also, each stage has its own name, which characterizes the readiness of the product at this stage.

Software development life cycle:
Pre-alpha
Alpha
Beta
Release candidate
Release
Post release

A decision table is an excellent tool for organizing complex business requirements that must be implemented in a product. Decision tables present sets of conditions whose simultaneous fulfillment should lead to a certain action.
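A decision table is naturally represented as data, with one row per combination of conditions. A minimal Python sketch with invented shipping rules:

```python
# Each row: (condition 1, condition 2, resulting action).
DECISION_TABLE = [
    # (is_member, order_over_50)
    (True,  True,  "free shipping + gift"),
    (True,  False, "free shipping"),
    (False, True,  "free shipping"),
    (False, False, "standard shipping fee"),
]

def decide(is_member: bool, order_over_50: bool) -> str:
    """Look up the action for a given combination of conditions."""
    for member, over_50, action in DECISION_TABLE:
        if (member, over_50) == (is_member, order_over_50):
            return action
    raise ValueError("combination not covered by the table")

print(decide(True, True))
print(decide(False, False))
```

Because every combination of conditions is an explicit row, the table doubles as a checklist of test cases: one test per row.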

An aptitude test is any psychometric instrument used to predict the capabilities of a particular person. Measures of achievement, special abilities, interests, personality traits, or any other human quality or behavior may qualify as aptitude tests. The scope of the term "aptitude test" is usually limited to individual tests or batteries of special ability tests designed to measure the ability to master various disciplines or the practical mastery of specific professional skills.

Intelligence tests such as the Stanford-Binet Intelligence Scale and the Wechsler Adult Intelligence Scale measure a set of special abilities. The results obtained with them correlate significantly with success in a wide range of activities. However, these tests have low specificity: their correlations with academic performance in particular fields are usually low. In contrast, special ability tests have high specificity: their average correlations with performance across a broad spectrum are generally lower than those of general intelligence tests, but the correlation of a specific test with performance in a well-defined domain is higher.

Initially, the developers of general ability tests believed that such tests measured innate learning potential, and therefore that performance on them should not be influenced by educational and training experience. In practice, however, performance on ability tests, such as measures of motor agility, improves significantly with practice.

Learning aptitude tests predict success in narrow areas such as mathematics, music, the native language, and art, and are suitable for placing students into specializations. They often have a broader scope than achievement tests, but it is often very difficult to distinguish the two on the basis of specific tasks. The main difference between them is their purpose: aptitude tests anticipate future learning; achievement tests assess past learning and current knowledge. The source of confusion is that many achievement tests are more accurate in predicting future achievement than some aptitude tests, especially when the intended achievements lie in a narrow domain. A. Anastasi, in her work "Psychological Testing", notes that the difference between ability and achievement tests can be displayed on a continuum: at one end are tests of specific school achievements (for example, tests given by a teacher for use in his or her own class), and at the other, tests of general abilities (for example, intelligence tests). Aptitude tests such as the Scholastic Assessment Test (SAT) and the Graduate Record Examination (GRE) fall in the middle of this continuum.

Based on the model of the Army Alpha test (developed in the USA in 1917), numerous tests were created to measure intelligence: IQ tests. If the performance of a large group of children on an intelligence test is plotted as a graph showing the frequency of each score, the result is a normal distribution curve. The mean (average score) is always 100, and the standard deviation is approximately 15. Children whose score does not reach 70 (the bottom 2% of the population) are classified as mentally retarded, and children with scores above 130 (the top 2% of the population) are sometimes classified as gifted.
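The stated cut-offs follow from the normal distribution itself; they can be checked with Python's statistics.NormalDist:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # mean 100, standard deviation 15

below_70 = iq.cdf(70)               # share of scores below 70
above_130 = 1 - iq.cdf(130)         # share of scores above 130

# 70 and 130 lie two standard deviations from the mean, so each tail
# holds about 2.3% of the population (the "2%" quoted above, rounded).
print(f"below 70:  {below_70:.1%}")
print(f"above 130: {above_130:.1%}")
```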

Multifactor ability tests contain a battery of subtests that measure a wider range of abilities than IQ tests. The information they provide is useful in vocational and educational counseling. The battery of subtests is standardized on the same people, which allows comparison across subtests and identification of weak and strong abilities. Examples of aptitude test batteries are the Differential Aptitude Tests (DAT) and the General Aptitude Test Battery (GATB), which is used in career counseling to select professions based on a system of patterns of professional suitability.

The commonly used DAT covers eight subtests: Verbal Reasoning, Numerical Ability, Abstract Reasoning, Clerical Speed and Accuracy, Mechanical Reasoning, Spatial Relations, Spelling, and Language Usage. The sum of the scores on the Verbal Reasoning and Numerical Ability subtests gives a composite indicator comparable to general IQ scores on the Wechsler Intelligence Scale for Children (WISC-R) or the Stanford-Binet scale. The DAT is used in working with students in grades 8-9 to provide them with information for planning further education.

Multifactor aptitude tests also include:

- "Armed Services Vocational Aptitude Battery" (ASVAB);

- "Non-Reading Aptitude Test Battery" (NATB);

- "Complex battery of abilities";

- "Guilford-Zimmerman Ability Battery";

- "International Primary Factors Test Battery";

- "Metropolitan Readiness Tests" (MRT);

- "Boehm Test of Basic Concepts" (BTBC).

There are also special ability tests to predict success in specific fields, assessing clerical and stenographic abilities, vision and hearing, mechanical abilities, musical and artistic abilities, and creativity. For selection into specific specialties, the following are used:

- "Scholastic Aptitude Test" (SAT)

- "American College Testing Program Test Battery" (ACT)

- "Law School Admissions Test" (LSAT)

- "Medical College Admission Test" (MCAT).

Aptitude tests must be valid and reliable. It is critical that they show predictive validity, that is, the extent to which test scores can predict a given criterion. Aptitude test scores are used not to determine success on the tasks the tests contain, but to predict some relevant criterion (for example, the Miller Analogies Test can be used to predict success in postgraduate study). Correlation coefficients are usually used to describe the predicted relationships; correlations of 0.40 to 0.50 are considered acceptable. It is also desirable for some aptitude tests, especially general intelligence tests such as the Stanford-Binet Intelligence Scale, to have construct validity.
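Predictive validity is usually reported as a Pearson correlation between test scores and the later criterion. A short Python sketch with invented scores:

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Invented data: aptitude-test scores vs a later performance criterion.
test_scores = [52, 61, 70, 75, 83, 90]
performance = [55, 60, 66, 80, 78, 95]

r = pearson(test_scores, performance)
print(f"r = {r:.2f}")
# Values of 0.40 to 0.50 are considered acceptable for aptitude tests;
# this small invented sample correlates much more strongly.
```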

Knowledge of aptitude test scores can help teachers predict student success and develop personalized approaches to their learning. In career counseling, aptitude tests help identify differences in abilities and determine the balance of strengths and weaknesses of the person being counseled in terms of the skills needed to master various professions. These results also help counselors diagnose the causes of underachievement. For example, IQ tests may show that a child is bored in class or frustrated with school. Ability tests are also used to identify mental retardation.

In situations where a limited group of students needs to be selected from a large number of candidates, aptitude tests can provide a basis for comparison between these individuals, and then, in combination with other sources of information, test scores influence the results of the selection of certain children.