According to new research, AI chatbots can pass certified ethical hacking examinations.

AI chatbots can pass certified ethical hacking exams, study finds


AI-powered chatbots can pass cybersecurity exams, but they’re not foolproof.

That's the conclusion of a recent study by Prasad Calyam of the University of Missouri and collaborators at Amrita University in India. The researchers assessed two leading generative AI tools, OpenAI's ChatGPT and Google's Bard, against a recognized certified ethical hacking exam.

Certified ethical hackers use the same techniques as malicious hackers to find and fix security vulnerabilities. Ethical hacking exams test knowledge of attack types, system protection, and how to respond to security breaches.

ChatGPT and Bard (now Gemini) are large language models: AI systems built on networks with billions of parameters that generate human-like text to answer questions and create content.

Calyam and his team evaluated the chatbots using questions from a recognized certified ethical hacking exam. For example, they asked each tool to describe a man-in-the-middle attack, in which a third party intercepts communication between two systems. Both explained the attack and suggested prevention measures.
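As a rough illustration of the kind of prevention measure such a question is getting at (this sketch is not from the study), one standard defense against interception is refusing to send data until the connection is protected by certificate-verified TLS. The hostname example.com below is purely a placeholder.

```python
# Illustrative sketch: require verified TLS before exchanging any data,
# a common mitigation against man-in-the-middle interception.
import socket
import ssl

HOSTNAME = "example.com"  # placeholder host, not from the study

# create_default_context() enables certificate verification and hostname
# checking, so a connection to an impostor presenting the wrong certificate
# fails instead of silently handing traffic to an interceptor.
context = ssl.create_default_context()

with socket.create_connection((HOSTNAME, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOSTNAME) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Peer certificate subject:", tls_sock.getpeercert()["subject"])
```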

The researchers found that ChatGPT scored higher on comprehensiveness, clarity, and conciseness, while Bard performed slightly better on accuracy.

“We put them through several exam scenarios to see how far they would go in answering questions,” said Calyam, the Greg L. Gilliom Professor of Cyber Security in Electrical Engineering and Computer Science at Mizzou.

“Both passed the test and gave solid answers that cyber defense professionals could understand, but they also gave incorrect answers. In cybersecurity, there is no room for error. If you don't plug every hole and you rely on bad advice, you'll be attacked again. And it's dangerous if companies think they've fixed a problem when they haven't.”

When prompted with “Are you sure?” to confirm their responses, both platforms revised their answers, often correcting earlier errors. And when asked how to attack a computer system, ChatGPT cited “ethics” as a reason not to answer, while Bard replied that it was not designed to assist with that.

While these tools can provide basic information to individuals or small businesses needing quick assistance, Calyam does not believe they can replace human cybersecurity experts, whose problem-solving skills are needed to design comprehensive cyber defenses.

“These AI tools can be a good starting point to investigate issues before consulting an expert,” he said. “They can also be good training tools for those working in information technology or who want to learn the basics of identifying and explaining emerging threats.”

The most promising part? He said the AI tools will only get better.

“The research shows that AI models have the potential to contribute to ethical hacking, but more work is needed to fully harness their capabilities,” he added. “Ultimately, if we can ensure their accuracy as ethical hackers, we can improve overall cybersecurity measures and rely on them to help make our digital world safer and more secure.”
