A pentester’s perspective – is AI after our jobs?

Written by Kerry

April 18, 2023


One thing that’s hard to avoid on social media recently is AI, and in particular ChatGPT*, which has catapulted its way into virtually everyone’s feed.

No longer confined to discussions amongst those working in, or with an interest in, tech, one question dominates whatever industry you work in: is AI coming for our jobs?

Our Technical Director and Head of Training, Dan Cannon, has been presenting on the subject of AI in cyber at a number of events recently (including live development of tools), which has resulted in some thought-provoking and lively conversations.

In this blog, we give a brief overview of AI, as well as some of the changes, potential implications and possibilities, including its use in pentesting.

One thing does seem certain: as the capabilities of AI develop, along with new entrants to the market, this discussion isn’t going anywhere soon!

Let’s kick off with a bit of background. Currently, AI fits into four main types:

Reactive AI: The most basic type of AI, it provides predictable output based on input, but it is not able to learn from past actions or conceive of the past or future.  Examples include Deep Blue (IBM’s chess ‘champion’) and Alexa.

Limited Memory AI: Unlike reactive AI, limited memory AI learns from the past and builds experiential knowledge.  By combining historical data with pre-programmed information, it can make predictions and perform complex tasks, although its memory is limited and it can only recall data for a specific period of time.  Examples include autonomous vehicles and ChatGPT.

Theory of Mind AI: Whilst this type is purely conceptual at the moment, progress is happening.  Theory of mind AI would understand that the people and things in its environment have thoughts and emotions that can alter behaviour.  Sophia the Robot is a good example of work towards this type of AI.

Self-Aware AI: The most advanced form of AI, in which machines would be aware of their own emotions and those of others around them, while also having aims of self-determination.  Whilst this AI would have needs and desires, it is only theoretical at the moment.

The fear of AI replacing human roles is nothing new. Throughout history, many inventions, developments and automations have changed the way we work and the jobs we do. Cast your mind back to the introduction of robotics to car assembly lines: existing workers immediately feared they would lose their jobs. And whilst some did, new jobs were created at the same time, some of which simply didn’t exist previously.

AI is now driving the fourth industrial revolution, or Industry 4.0. But whilst some see AI as a threat, many others see it as an opportunity.

As an example, think of all the mundane or repetitive tasks that are a daily necessity for a myriad of roles, regardless of sector or seniority. Now imagine that AI could take over those tasks. If you’ve played around with ChatGPT, you will have seen how quickly it can write a blog, a song, an essay or a string of code. So, what could you take off your to-do list, or how much more quickly could you create first drafts of everything from business plans to proposals? That said, much like a student researching on Wikipedia, fact-checking is essential to avoid spreading misinformation.

And whilst we are considering the human jobs AI can undertake, what about the jobs humans can’t do? Even if it’s just the fact that AI doesn’t need breaks, food or sleep. We’ve seen massive benefits and advancements from the use of technology in the medical sector. Imagine the impact of self-aware AI in patient care, where a robot could recognise and address the signs or symptoms of fear, pain, discomfort or distress.

Or, closer to home in pentesting, imagine using AI to identify vulnerability trends and then create working proof-of-concept code to demonstrate the potential risk of a cyber incident. The ability to do this at speed could help identify new and innovative attack pathways and explore the most effective and efficient remediation plans, as sketched below.
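To make that a little more concrete, here is a minimal sketch in Python of how a tester might ask a model to summarise vulnerability trends from scan output and propose a remediation outline. It assumes the openai package roughly as it stood at the time of writing (using ChatCompletion), an API key, and a hypothetical findings.json file of scan results; it is an illustration rather than a recommended workflow, and the caution later in this blog about sending sensitive data to ChatGPT applies here too.

```python
# Sketch only: summarise pentest findings with an LLM and draft a remediation outline.
# Assumes the openai Python package and a valid API key; "findings.json" is hypothetical.

import json
import openai

openai.api_key = "YOUR_API_KEY"  # never paste client data or keys you can't share

# Example input: a list of findings such as {"host": ..., "cve": ..., "severity": ...}
with open("findings.json") as f:
    findings = json.load(f)

prompt = (
    "You are assisting a penetration tester. Given the findings below, "
    "summarise the recurring vulnerability trends and suggest an outline "
    "remediation plan, ordered by risk.\n\n" + json.dumps(findings, indent=2)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep the summary conservative rather than creative
)

# Treat this as a first draft: as noted above, fact-checking the output is essential.
print(response["choices"][0]["message"]["content"])
```

The value here is speed: the model produces a first-pass summary in seconds, which the tester then verifies and refines with their own knowledge of context and risk.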

Like anything else, there are pros and cons to AI in its current state. The primary positive is the potential for customised AI solutions based on specific data or use cases, as well as increases in system stability, the speed at which tasks are completed, the complexity of data analysis that is possible, and the automation of repetitive tasks.

However, on the flip side there are limitations, such as AI’s lack of understanding of nuance or complex concepts, a reliance on data that may lead to biased or unreliable output, and a lack of creativity, as it is currently unable to produce truly original content.

In summary, we can use AI to automate and simplify elements of our jobs in the same way we have with other developments and innovations in the past – and then we add our complementary skills, or human intelligence, in areas such as context and risk.

Finally, a few words of caution. Firstly, if you are copying and pasting content or using commercially sensitive information in ChatGPT, think about where it’s going, where it’s being stored and who can access it. Secondly, whilst we can use ChatGPT to our own advantage, it’s a sure bet that hackers are doing exactly the same and trying to keep one step ahead!

*In case you have been wondering about the GPT in ChatGPT, it stands for Generative Pre-trained Transformer.
