
- May 22, 2023
- C. Imran
- 0 Comments
- Emerging Technology
AI Hacking: Do AI Algorithms Fail Gracefully?
We need to choose to be less like robots and build robots to be more like humans!
Modern AI systems are black boxes: even their programmers often have no idea how an algorithm reaches a conclusion. Now imagine an AI system fed with malicious intent and set loose to find and exploit vulnerabilities in financial, social, political, and economic systems. Advances in modern AI have significantly changed the speed, scale, and scope of hacking, making it easier for attackers to find and exploit weaknesses in systems at a breathtaking pace.
Are you a metaphorical dog misidentified as wolf?
In one well-known case, an AI algorithm misidentified a husky in a photograph as a wolf. Researchers discovered that the algorithm had mostly paid attention to the snow in the background: because most of its training pictures of wolves were taken in snow, it equated the presence or absence of snow with the presence or absence of a wolf. The worrying part is that the researchers had no idea this was happening until they rewrote the algorithm to explain itself.
Image:
A husky (on the left) is confused with a wolf, because the pixels characterizing wolves (on the right) are those of the snowy background. This artifact is due to a training set that was insufficiently representative.
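The failure mode above, a classifier latching onto a spurious background cue, can be sketched with a toy example. This is a hypothetical nearest-centroid model on two hand-made features, not the researchers' actual network:

```python
# Toy illustration of a spurious correlation: each "image" is reduced to two
# hand-made features: [snow-in-background score, animal-shape score].
# In the training data, wolves almost always appear on snow, so the snow cue
# alone separates the classes, and the model learns it instead of the animal.

def centroid(rows):
    """Mean of each feature across a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, centroids):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Training set: wolves photographed on snow, huskies on grass.
wolves  = [[0.90, 0.40], [0.80, 0.50], [0.95, 0.45]]
huskies = [[0.10, 0.50], [0.05, 0.45], [0.20, 0.55]]
centroids = {"wolf": centroid(wolves), "husky": centroid(huskies)}

# A husky photographed on snow: its shape score matches a husky,
# but the snow cue drags the prediction to "wolf".
husky_on_snow = [0.90, 0.50]
print(classify(husky_on_snow, centroids))  # prints "wolf"
```

The fix is the one the researchers needed too: inspect which features actually drive the decision, and rebalance the training data so the background no longer predicts the label.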
AI can qualify you as a potential violent offender!
COMPAS – the Correctional Offender Management Profiling for Alternative Sanctions algorithm – was launched as a model of efficiency: it let judges work through backlogged caseloads faster, make decisions more quickly, and manage court cases efficiently. COMPAS was used in 13 US states to estimate the likelihood of a defendant becoming a recidivist.
Result:
If you are African American, COMPAS is 77% more likely to flag you as a potential violent offender than if you are Caucasian. Defendants sentenced with the help of COMPAS challenged the bias in the software, only to learn that they were not allowed to inspect its source code, and were told simply that the AI system had to be used properly for accurate results.
Deep Thought – the most powerful computer has an explainability problem
In the best-selling novel The Hitchhiker’s Guide to the Galaxy, “Deep Thought”, the most powerful computer ever built, was asked the ultimate question of Life, the Universe, and Everything. It replied “42” and was unable to explain its answer.
Data Breach Confirmed by ChatGPT, Raising Security Issues
When ChatGPT and comparable chatbots first became widely accessible, the fear in the cybersecurity community was how the AI technology could be leveraged to conduct cyberattacks. In reality, threat actors quickly found ways to manoeuvre around its security controls and use ChatGPT to create illicit code.
Attackers are now targeting ChatGPT itself rather than merely using it as a tool for attacks. OpenAI, the company that created the chatbot, acknowledged that a flaw in an open-source library used in its code had resulted in a data breach in the system.
The service was taken offline until the breach was rectified. It takes hackers very little time to target an increasingly popular technology or application. In ChatGPT’s case, the attack was made possible by a security hole in the Redis open-source library, which let users view the chat logs of other users who were online at the same time.
Open-source libraries bundle readily available, frequently used routines and resources: pre-written code, subroutines, data configuration, classes, documentation, templates, and values used to build dynamic interfaces. OpenAI uses Redis to cache user data for quicker access.
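The failure mode reported in this incident can be sketched in a few lines. This is a deliberately simplified stand-in, not the actual redis-py client code: a shared connection that matches responses to requests purely by arrival order, so an abandoned request poisons the pipeline for the next caller.

```python
from collections import deque

# Simplified sketch of a pipelined cache connection (illustrative only, not the
# real Redis client): replies are matched to callers by order alone. If one
# caller's request is cancelled before its reply is read, every later caller
# receives the reply meant for someone else -- a cross-user data leak.

class SharedConnection:
    def __init__(self):
        self._responses = deque()

    def send(self, key):
        # The server always answers; the reply sits queued on the connection.
        self._responses.append(f"cached data for {key}")

    def recv(self):
        # Responses are handed out strictly in arrival order.
        return self._responses.popleft()

conn = SharedConnection()

# User A's request is sent, then cancelled before its reply is consumed.
conn.send("user:alice:history")
# ... cancellation happens here: nobody calls conn.recv() for Alice ...

# User B issues a request on the same shared connection.
conn.send("user:bob:history")
leaked = conn.recv()   # B receives A's reply
print(leaked)          # prints "cached data for user:alice:history"
```

The general lesson: any client that correlates requests and responses implicitly, by position rather than by an explicit request ID, leaks data the moment one response goes unconsumed.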
It is easy for flaws to open up and go undiscovered, since open-source code is developed and accessed by thousands of contributors. In the grand scheme of things the ChatGPT attack was fairly trivial, and OpenAI fixed the flaw as soon as it was identified. But even a relatively minor cyber-attack can cause serious damage.
That episode, though, was only the surface. As OpenAI’s researchers probed further, they found that the same vulnerability was probably to blame for payment information being visible for a few hours before ChatGPT was taken offline.
Some users could see other active customers’ first and last names, email addresses, payment addresses, the last four digits (only) of their credit card numbers, and card expiration dates. In a statement on the incident, OpenAI said that full credit card numbers were never disclosed.
Some companies, and even countries, are tightening restrictions due to privacy concerns. JPMorgan has limited its employees’ use of ChatGPT under its general restrictions on third-party software and applications, but there are also worries about the security of financial data that might be entered into the chatbot.
Additionally, Italy justified temporarily restricting the programme nationwide on the grounds of protecting its citizens’ personal data; according to officials, GDPR compliance is the cause of the anxiety. Experts also anticipate that criminals will employ ChatGPT to develop smart, convincing phishing emails.
The trademark errors in syntax and sentence construction that gave phishing scams away are disappearing: chatbots can now imitate native speakers and send well-targeted, relevant messages.
AI White-Hat Hacking Machine
“Mayhem” – White-Hat Hacking Machine that finds software vulnerabilities, then patches them
AI systems like Mayhem work together with human security experts to make the world’s software safer. In the more distant future, machine intelligence will handle the job alone.
People need a way to check the software they’re buying and ensure that it’s safe. Coders will, of course, make a due-diligence effort to flush out security flaws, but their main concerns are always more basic: They have to ship their product on time and ensure that it does what it’s supposed to do. The problem is that hackers will find ways to make the software do things it’s not supposed to do.
Today’s state of the art for software security involves using special tools to review the source code and to flag potential security weaknesses. Because that process produces a lot of false positives—flagging things that in fact are not weaknesses—a human being must then go through and check every case. To improve the bug-finding rate, some companies rely on white-hat hackers to do a one-time analysis or to participate in “bug bounty” programs, which pay them according to the number and the severity of the bugs they find. But only the most profitable companies can afford the strongest testing of their software. The issue grows more complex as finished software includes ever more components from open-source projects and other third parties.
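Tools like Mayhem lean on dynamic techniques such as fuzzing (alongside symbolic execution): a bug is reported only when a concrete input actually triggers it, so there are no false positives to triage by hand. A minimal random fuzzer, sketched here against a made-up buggy parser, is illustrative only and not Mayhem’s actual engine:

```python
import random

def parse_record(data: bytes):
    """Toy target with a hidden bug: whenever the first byte marks a
    'long' record (> 100), it assumes at least three bytes are present."""
    if len(data) > 0 and data[0] > 100:
        return data[1] + data[2]          # IndexError on short inputs
    return 0

def fuzz(target, runs=2000, seed=0):
    """Throw random byte strings at `target`; collect inputs that crash it."""
    rng = random.Random(seed)             # fixed seed keeps the run reproducible
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 6)))
        try:
            target(data)
        except Exception as exc:          # a crash is a concrete, replayable bug
            crashes.append((data, type(exc).__name__))
    return crashes

found = fuzz(parse_record)
print(f"{len(found)} crashing inputs, e.g. {found[0]!r}")
```

Unlike a static-analysis warning, each entry in `found` comes with the exact input that reproduces the failure, which is why dynamic findings need no human filtering.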
AlphaGo
AlphaGo is the first computer program to defeat a professional human Go player, the first to defeat a Go world champion, and is arguably the strongest Go player in history.
Conclusion
New cyber dangers will emerge with each iteration of chatbots, whether because of their growing popularity or their ability to use ever more sophisticated language, making the technology a top target for attack. OpenAI is taking precautions to avoid further data breaches within the application, and anyone who finds an undisclosed vulnerability can receive a bug bounty of up to $20,000.
On the other hand, according to The Hacker News, “the technology fails to address hallucination concerns, whereby the chatbot is given instructions to produce illicit code or other flawed outputs.” Thus, it appears that OpenAI wants to make the technology less vulnerable to external assaults but is doing little to stop the chatbot from being the source of hacking attempts.
We are driving in the rain with today’s AI systems: we need to define safety standards, roles, boundaries, regulations, accountability, and transparency for AI before that rain turns into snow. Our intellectual laziness could be a disaster. Technologists, governments, and policymakers must ensure that technology and law do not subvert each other but work together to create a healthy AI ecosystem.
Author: C. IMRAN
IT Program Manager, Business Owner (www.shieldonsystems.com)
We work with a variety of technologies, platforms, and frameworks to help you develop custom eCommerce & software solutions, build mobile applications, & provide cybersecurity services. We also provide training and support to ensure the successful implementation and maintenance of the solutions. With our expertise, you can be confident that your business will be able to stay ahead of the competition and achieve success.
LinkedIn: https://www.linkedin.com/company/shield-on-systems/
Facebook: https://www.facebook.com/SOSWEBAUDITFIX/
Instagram: https://www.instagram.com/shieldonsystems/
Twitter: https://twitter.com/fix_sos
Email: [email protected]