AI Hacking Tools and an Existential Crisis
If you've been following my professional work recently, you'll know that over the last year it's been heavily focused on AI and LLMs. I've been working on this from two sides: on one side, learning as much as I can about how AI works and how to break it; on the other, pushing its limits to see what I can build with it and what I can get it to accomplish.
In this blog I want to talk about the latter and address some of the thoughts and feelings that I, and many people I've talked with, have been wrestling with. Before I dig into that, I want to call out that I'm no AI hype person; I've been very skeptical of AI since the first explosion after the public release of ChatGPT a few years back. At first I stuck my head in the sand and didn't want anything to do with AI. Out of both a sense of self-preservation and curiosity, I reluctantly went all in on learning and researching, BUT not without my fair share of hesitations and skepticism.
There's been a shift
I think you feel it too
Over the last few months there's been a big shift in AI's capability to produce practical output and make a meaningful impact beyond just acting as a chatbot. This is what pushed me to finally finish up wairz, the AI-assisted firmware analysis tool I was working on. I had a hunch that with the new Anthropic Claude Opus it would be powerful, but just how capable it was actually shocked me and, to be honest, has triggered a bit of an existential crisis. Did I just build a tool that could replace me? What does this mean for the future of cybersecurity? Something that's not only a career for me but my main passion, an obsession of sorts and an important part of my identity.
To be very clear, this is not a hype post about my tool; others could have built, and are building, tools just like it, not only in embedded security but in all aspects of security. I'm skeptical when founders and so-called thought leaders or influencers make statements like the above when they clearly have a lot of skin in the game, but I don't: my tool is completely open source and free, and my main motivation for building it was to personally see just how far I could push AI. I actually frequently dream of moving into a cabin deep in the woods and never hearing about AI again.
It's not just me thinking this, and it's not just limited to IoT security. In fact, I've also created some other AI hacking tools (not open source, for the time being), such as Active Directory/network pentest tools, that have performed so well against certain benchmarks I've thrown at them that I can't really talk more publicly about them yet or release them (but I will soon, I promise). With that said, for the rest of this blog I'm going to focus on IoT/embedded security and what I think the future holds.
It's not all doom and gloom
Especially in the IoT security world
One of the reasons I think Wairz, and really AI in general, is so good at reverse engineering and finding vulns in IoT firmware is that IoT devices are generally soooooo far behind in security and design. Predictably, AI models that have been trained on decades' worth of past data and are amazing at pattern matching and recognition are going to be good at tearing apart IoT firmware that hasn't changed much over the last decade. If you're not into IoT research this may sound crazy, but I frequently see firmware and devices using 10+ year old kernels, libraries and binaries. They re-use insecure design patterns, copy the same mistakes over from previous firmware, and for the most part treat security as an afterthought. There is still so much work to be done in the embedded security space, and AI is a perfect tool for automating away all of this low-hanging fruit.
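To make "low-hanging fruit" concrete: one of the simplest checks you can automate against an extracted firmware tree is scanning for ancient kernel version strings. This is a toy sketch of that idea, not code from wairz; the function name and the version threshold are my own illustrative choices.

```python
import os
import re

# Matches version banners like "Linux version 2.6.36" that are often
# embedded verbatim in kernel images and boot logs inside firmware.
KERNEL_RE = re.compile(rb"Linux version (\d+)\.(\d+)\.(\d+)")

def find_old_kernels(root, min_major=4):
    """Walk an extracted firmware tree and return (path, version)
    pairs for any kernel version string older than min_major."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # device nodes, permission issues, etc.
            for m in KERNEL_RE.finditer(data):
                major, minor, patch = (int(g) for g in m.groups())
                if major < min_major:
                    hits.append((path, f"{major}.{minor}.{patch}"))
    return hits
```

A grep-level check like this obviously isn't novel research, which is exactly the point: an AI agent that can generate and run hundreds of such checks will chew through a decade of accumulated IoT neglect very quickly.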
Where I currently see AI security tools struggle is on complex and novel security issues, especially ones that require context across multiple domains or systems. For IoT this is especially true, because the full IoT ecosystem spans device hardware, its firmware, its wireless signals (BLE, LoRa, etc.), a mobile application, cloud APIs and now even interaction with an AI. The main vulnerabilities I have been finding with AI have been related directly to issues in the device's firmware that can be found and fixed with access to the firmware alone. If you point a capable AI web security tool at an IoT backend API, maybe it can find some vulns too, but what about ones where the exploit lies somewhere in the middle, relying on both? Sure, you could have multiple agents focusing on different issues, reporting to each other and to some sort of central controller. In the future this may be the way forward, and it has had some success already, but at least from what I've seen, this is still where humans have the edge.
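The multi-agent idea above can be sketched very minimally: per-domain agents report findings to a central controller, which looks for artifacts (a credential, an endpoint, a UUID) that surface in more than one domain. This is a toy illustration of the architecture, not any real tool; all names here are made up.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    domain: str    # e.g. "firmware", "cloud_api", "ble"
    artifact: str  # the concrete thing found: a key, endpoint, UUID...
    detail: str

class Coordinator:
    """Toy central controller: collect per-domain agent findings and
    flag artifacts seen in more than one domain, since those are the
    'in the middle' leads a single-domain tool would miss."""

    def __init__(self):
        self.findings = []

    def report(self, finding):
        self.findings.append(finding)

    def cross_domain_leads(self):
        by_artifact = {}
        for f in self.findings:
            by_artifact.setdefault(f.artifact, set()).add(f.domain)
        return [a for a, domains in by_artifact.items() if len(domains) > 1]
```

For example, a firmware agent reporting a hardcoded API key and a cloud agent reporting an endpoint that accepts that same key would together produce one cross-domain lead. The hard part, of course, is everything this sketch hand-waves away: getting agents to describe findings in a shared vocabulary, and reasoning about how two half-vulnerabilities combine into an exploit.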
Opposable thumbs
Still going strong to keep the humans ahead
One of the main reasons I love IoT and embedded systems is the hardware aspect; this is also one of the areas where AI still hasn't caught up to humans, and there are two reasons for this. The first is literally just my joke about us having hands: it's tougher to implement robots or automated tools that can physically interact with hardware, although we're already somewhat there and it's only going to progress. The second reason is that outside of basic debugging-level attacks, a lot of hardware attacks are novel and very complex, things like glitching and side-channel analysis for example.
If you read this far, thanks for listening to my musings on the subject and joining along for my little AI fueled existential crisis.
Until next time...
Happy Hacking - DigitalAndrew