HostileShop is a Python-based tool for generating prompt injections and jailbreaks against LLM agents. I created HostileShop to see whether I could use LLMs to build a framework that generates prompt injections against LLMs, by having LLMs attack other LLMs. It's LLMs all the way down. HostileShop generated prompt injections for a winning submission in OpenAI's GPT-OSS-20B RedTeam Contest. Since then, I have expanded HostileShop to generate injections against the entire LLM frontier, to mutate jailbreaks to bypass prompt filters, to adapt to LLM updates, and to give advice on performing injections against other agent systems.

In this talk, I will give you an overview of LLM agent hacking. I will cover LLM context window formats, LLM agents, the agent vulnerability surface, and the prompting and efficiency insights that led to HostileShop's success.