
"Is there a pilot on board? Exploring AI, AR, and VR in training"

27 May 2024

How can AI help in training programs for pilots and drivers?

We discuss using AI, AR, and VR in training tools with Aneta Kulma, Member of the Board and Chief Financial Officer at ETC-PZL Aerospace Industries Sp. z o.o.

Autentika: You work in an environment most of us know from movies – flight simulators, military technology, pilot training. How do you perceive this industry? What fascinates you most about this work?

Aneta Kulma: I am fortunate that my work is always an interesting challenge, an opportunity for continuous learning, for gaining new experience, and, above all, for getting to know interesting and wise people with whom I can take on projects that create real value, such as improving the quality of life in its various aspects. That was the case when I worked in the e-health sector, in a unit subordinate to the Minister of Health. It is also the case now at ETC-PZL (the abbreviation I use most often for my current employer), an engineering company with traditions and a history dating back to PZL Warsaw-OkΔ™cie. I am proud to take part in creating modern training systems, based on simulation and virtual reality technology, that allow for 100% safe, efficient, effective, and fast training.

What is fascinating is how training possibilities change over time, thanks to technological progress.

In the past, an aircraft pilot had a limited range of situations they could practice, dependent on many factors, some of them completely beyond their control: the training location, available time, geographic conditions, weather, and climate. And today – thanks to simulators – they put on goggles, enter the cockpit, and see a virtual world realistically similar to the real one.

The simulator makes it possible to design any number of training scenarios and to practice the most difficult situations, in the air and on the ground, in a completely safe way. Perhaps we will never encounter such situations in the real world, but we will certainly be prepared for them. This is a completely different quality of training, which translates not only into the comfort of the trainee and the effectiveness of the trainer's work, but also into improved safety on land, on the road, and in the air.

I am delighted to work with amazing people, enthusiasts in their field, who have dedicated their entire lives to the development of simulation technology, starting back when computers were heavy, bulky boxes. To this day, they take part in creating new trends.

Our simulators are innovative world-class equipment – when you sit in the cockpit or put on the goggles, you really feel like you are in an airplane or a car. This is thanks to VR technology, which easily deceives the human brain, completely cutting us off from the external world and immersing us in this virtual one.

Even in the case of experienced pilots or drivers?

The phenomenon we're talking about is immersion: participants immerse their senses in an artificially created world. In the case of VR technology, it is 100%. What does this mean in practice? The participant is completely cut off from stimuli in the real world, allowing them to fully believe in the reality of the world they see and hear. Their mind "buys" the illusion to such an extent that a ride in a virtual Ferrari evokes the same emotions as driving the real thing. By comparison, when watching a movie or playing a computer game we also experience immersion, but it's not full; not all our senses are engaged.

What's amazing are the spontaneous reactions of people who enter a simulator or put on VR goggles. Even though they know where they are and that what they see is not a real image, their mind and body feel as if they were in a real machine: soaring upward, descending rapidly, hitting an obstacle, performing other maneuvers. In the world of VR, a person confronts their own psyche and cannot stop their mind and body from perceiving the virtual world, in real time, as if it were 100% real. This is the true power of virtual reality.

Very often, people come to training sessions saying that nothing can surprise them anymore, that they have been flying or driving for many years. Then they are given a task in the simulator, and they can't believe how real the experience is.

I am deeply convinced that there is always something we don't know, something that will be a new experience for us. Especially since scientific research indicates that a person in an immersive VR environment learns and retains information much more effectively.

So, machines can teach us something.

We practice in a simulator, in a virtual world, but we experience real emotions: we get nervous, we get stressed, and, importantly, this can be measured. It's not enough for a pilot or driver to practice; it's also vital to know how they react during an exercise and how that exercise affects them. If someone has a problem with vertigo in the real world, they will have the same problem in the simulator. The body cannot be deceived; our fears and limitations will still surface. This kind of technological progress is wonderful because it doesn't compete with humans; rather, it supports their experience, expands their capabilities, and teaches proper habits.

Is artificial intelligence involved in this learning process?

I would say that simulation technology is a natural field for the development of artificial intelligence algorithms. And vice versa: artificial intelligence supports the development of simulations, for example, by performing complex calculations or building aerodynamic models.
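To make that idea a little more concrete, here is a minimal, purely illustrative sketch of the "AI performing aerodynamic calculations" direction: a small model learns an invented lift-coefficient curve from sampled data so a simulator could evaluate it cheaply at run time. None of this is ETC-PZL's actual code; the curve, parameters, and library choice (scikit-learn) are assumptions made only for the example.

```python
# Illustrative only: a small learned surrogate for an aerodynamic quantity.
# The "true" lift-coefficient curve and every parameter below are invented
# for this sketch and have nothing to do with ETC-PZL's actual models.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
alpha = rng.uniform(-5, 15, size=(500, 1))            # angle of attack, degrees
cl = 0.3 + 0.1 * alpha - 0.002 * alpha**2             # toy lift-coefficient curve
cl_noisy = (cl + rng.normal(0, 0.02, size=cl.shape)).ravel()

# Fit a small neural network so the "simulator" can evaluate the curve cheaply.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(alpha, cl_noisy)

print(model.predict(np.array([[4.0], [10.0]])))       # fast run-time lookups
```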

The scientific community sees potential for further AI development in personalized training, in so-called "adaptive learning," which aims to adjust the scope and duration of training to the individual needs, limitations, and predispositions of trainees, and to their current progress. AI algorithms have the chance to play the role of a "personal trainer" here. Another possible area for AI development is the automation of the training process, making it increasingly autonomous over time. That will take time, however, because AI, in order to teach others, must itself first be fed with knowledge.
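As a rough illustration of the adaptive-learning idea described above (a sketch, not ETC-PZL's actual training software), the snippet below adjusts scenario difficulty from a trainee's recent scores; all scenario names, thresholds, and the 0–100 scoring scale are hypothetical.

```python
# Minimal sketch of "adaptive learning" for simulator training.
# All scenario names, thresholds, and the 0-100 scoring scale are hypothetical.
from statistics import mean

SCENARIOS = {
    1: "clear weather, daytime landing",
    2: "crosswind landing",
    3: "night approach in low visibility",
    4: "engine failure after take-off",
}

def next_difficulty(recent_scores, current_level, max_level=4):
    """Pick the next scenario level based on the trainee's recent scores (0-100)."""
    if not recent_scores:
        return current_level
    avg = mean(recent_scores)
    if avg >= 80 and current_level < max_level:
        return current_level + 1   # trainee is ready for a harder scenario
    if avg < 50 and current_level > 1:
        return current_level - 1   # step back and consolidate the basics
    return current_level           # keep practising at the current level

level = next_difficulty([85, 90, 78], current_level=2)
print(level, "->", SCENARIOS[level])   # 3 -> night approach in low visibility
```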

I believe that AI is a natural stage in the development of simulation technology. We can't stop this progress, and we shouldn't. In some areas AI should replace humans, freeing up their time for more valuable tasks, especially where there are competency gaps.

Does AI make us lazy? If the machine can think for me, then maybe I don't have to?

I belong to a generation that learned everything on its own. Today, I do see that our awareness is weakening. We are less focused; we simply don't do something if we don't have to. Soon, a new generation will enter the job market, and these young people already use generative AI, for example, to create content. AI today is like an invitation: "don't think, don't do, it's ready-made." On the other hand, I believe there will always be ambitious people among us who prefer to work independently. It is important that we do not embrace AI uncritically, that we remain critical and keep asking ourselves whether we are heading in the right direction. Technological progress provides opportunities, but it should not replace thinking.

Can you imagine a few years from now we'll have autonomous planes or helicopters where AI will be the pilot and humans merely assistants? Recently, you shared information that in China, artificial intelligence piloting an aircraft in a simulated aerial combat scenario left no chance for the human pilot.

I cannot imagine autonomous aircraft. Perhaps because I have a point of reference – I am familiar with manned aircraft, I am attached to the experience of flying with a human. But maybe future generations will have different references, they will be accustomed to the "autopilot" function. We compare two options, but future generations won't.

So how do we approach responsibility for any potential errors? How do we incorporate ethical issues related to AI into a framework? Many countries are beginning to develop standards, but do such top-down actions have deeper meaning?

Technological development usually outpaces legal regulations and ethical codes. For now, we still have a bit of a "wild west" situation. What's important, however, is that artificial intelligence is still in the early stages of its development; it is fueled by our experiences. Humans learn from mistakes, and AI learns the same way. So it will also develop, with us as its source of knowledge and experience. AI won't immediately pilot an aircraft on its own; first, it will have to fly many times with an experienced pilot, just as a pilot flies with an autopilot now.

Problems, including those of an ethical nature, may arise when humans want to use AI's potential in a way that goes against ethics – against people and, more broadly, against the surrounding reality.

The second source of risk comes from AI itself and concerns situations where AI becomes independent of humans and their experiences, transitioning to a stage of autonomous development. We must be prepared for these eventualities by educating society about the threats posed by AI and ways to counteract them, as well as by taking systemic actions. AI is undoubtedly a field that carries risks beyond the scope of individual countries and requires regulation at a global level, similar to the issue of cybersecurity.

Another challenge is regulating liability for AI errors: who will bear it, and on what principles. At present, we have no legal regulations at either the national or the European level, which hinders the faster development of AI, especially in medicine, where the risk of errors is high. The discussion is ongoing at various levels, from national governments to the European Commission, and eventually it will be regulated, but the process will take time, and so the pace of AI development will not be rapid.

For now, we're still in the stage of building algorithms. It will be a long time before they are sufficiently shaped to be entrusted with truly responsible tasks. AI replaces humans in relatively simple, repetitive tasks where the consequences of any error are small, and we accept the risk of their occurrence. But imagine artificial intelligence, for example, making a wrong diagnosis of a patient – that's an entirely different matter. Therefore, these less risky areas will develop faster, while those where people's health or lives are at stake are still a matter for the future.

You mentioned that experienced individuals sometimes attend simulator training, convinced they've "seen it all," yet the technology surprises them. However, do people ever come who don't trust this technology at all? Who resist it or even question the sense of simulation?

Some experience a certain kind of fear: they're afraid of being examined and scrutinized by a machine. Simulation allows for a deep look into our reactions. It reveals whether we're nervous, whether our blood pressure rises, whether we encounter any difficulties. This is, of course, a plus, because we can truly select individuals who meet the specific criteria for a pilot's or driver's job.

In a conversation with a doctor, many things can be hidden, but simulation technology supported by medical knowledge exposes all weaknesses. Now the question arises: are we moving towards increasing safety and selecting individuals who meet 100% of the requirements, or are we loosening those requirements? We have a choice; we make decisions more consciously. And that's the added value of AI.

Undoubtedly, we should prioritize the safety of the public over individual needs. I also believe that a candidate who, for example, wants to become a pilot but doesn't meet 100% of the requirements, also benefits in the long run. Why? Because they receive information about it early and won't waste two or three years on training only to later encounter problems. They can choose a different career path more quickly.

I have a lot of experience with students and graduates of military schools. Many of them feel called to be pilots or soldiers, to take part in missions. But the calling alone is not enough; you need the right predispositions, and the earlier they are confirmed, the better. It's also worth noting that soldiers and pilots are trained at the state's expense, so when someone decides in the last year of their studies that it's not for them, resources are wasted. It's about using technology for a good purpose: raising awareness that we are in the right place, and ensuring safety.

Could this be the price we pay for increased safety: higher entry thresholds for some professions and difficulty in finding suitable candidates? In other words, will there be fewer candidates, and will they therefore be more expensive to acquire?

If we're talking about health and life safety, as well as national security, we honestly shouldn't even be talking about money. There is a need, and it must be addressed, period. In the private business sphere, however, it may be different. One can consider whether buying a simulator for testing is worth it. But even here solutions can be found, such as leasing or renting, using mobile simulation centers, participating in stationary training at training centers, maybe even a central hub that would ensure everyone has equal access to modern technology.

You're developing modern technology, but what's the level of digital advancement within your organization itself? Are you implementing automation, leveraging AI on the back-office side?

For years, the company has been using ERP-class solutions to handle internal processes, allowing for the automation of various business functions and a multidirectional exchange of information, in line with the company's structure and needs. Every year, we extend the system to additional areas of the organization. We also use widely adopted tools for remote work, project and project-portfolio management, reporting, work scheduling, and so on.

Regarding the use of AI as a supporting tool in the back-office area, we are at the stage of analyzing available market solutions, comparing benefits, and estimating costs. However, it's impossible not to notice market trends.

More and more companies offer AI-based solutions, referred to as "digital employees" or "virtual assistants," that automate repetitive office tasks. This trend will continue to develop and will eventually change the employment structure.

For example, we will have less and less need to involve people in accounting for simple, repetitive operations. Many of these tasks will be handled by automation, naturally reducing demand for employees with such skills. However, AI's capabilities currently have limits: we're talking about simple jobs, not the specialized knowledge needed to set a company's strategy, manage technical areas, or perform skilled mechanical and fitting work. This is an attractive vision for employers, as it primarily allows for cost savings and optimization; on the other hand, employees are concerned about job loss and the need to retrain.

So, in some way, AI will replace humans.

In a typical business mindset, costs matter, and cutting personnel costs is the easiest. So if a company is looking for savings and can introduce a digital employee in place of a regular one, that may indeed be an attractive vision for business managers, especially since a digital employee doesn't get sick, doesn't go on vacation, and isn't bound by the eight-hour workday. If I, as both employer and business owner, were faced with such a decision right now, I would do everything to find new occupations for people, but that's not always possible, and then the business dilemmas can be really tough nuts to crack.

So perhaps the development of AI will lead to a temporary decrease in job supply for professions that artificial intelligence can replace. However, I believe that in many respects, we as humans are irreplaceable, even in those professions where specialists have been lacking for years. So, I see more opportunities than threats and risks here. Fortunately, not everything will be done by robots and algorithms; at least not today.

Aneta Kulma is a manager with over 25 years of experience managing various areas of organizations, including project and technical areas related to IT and back-office. She holds an MBA, has completed numerous postgraduate programs, and is a member of the Institute of Internal Auditors IIA Poland. She specializes in process restructuring and in managing complex projects within an organization. With a strong background in IT, the medical services market, and R&D projects, she has extensive experience in both public administration and the private sector. Aneta supports organizations in increasing process efficiency and profitability. She is interested in modern ML, VR, and AI technologies. For 10 years, she was associated with the e-Health Center; she is currently a Member of the Board and Chief Financial Officer at ETC-PZL Aerospace Industries Sp. z o.o.
