
Software Engineer II, AI Agent Security
Posted 3 days ago

Job details

Company | Google |
Category | Computer Science |
Workload | 100% |
Location | Zürich |
Job description
Minimum qualifications:
- Bachelor’s degree or equivalent practical experience.
- 1 year of experience with software development in one or more programming languages (e.g., Python, C, C++, Java, JavaScript).
- 1 year of experience with data structures or algorithms.
- 1 year of experience building software for data privacy or security (e.g., identity and access management).
Preferred qualifications:
- Experience in AI/ML security research.
- Experience in a programming language suitable for security research and prototyping (e.g., Python).
About the job
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design, and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic about taking on new problems across the full stack as we continue to push technology forward.

In this role, you will ensure the safety and security of Google's AI agents by developing and scaling robust security architectures, pioneering novel defense strategies, and influencing both internal development practices and the broader industry's understanding of agent security. The team focuses on preventing unintended or harmful agent behaviors ("rogue actions") and protecting sensitive user data.

The Core team builds the technical foundation behind Google's flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users, and they drive the pace of innovation for every developer. We look across Google's products to build central solutions, break down technical barriers, and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.
Responsibilities
- Conduct research to identify, analyze, and understand novel security threats, vulnerabilities, and attack vectors targeting AI agents and underlying LLMs (e.g., advanced prompt injection, data exfiltration, adversarial manipulation, attacks on reasoning/planning).
- Design, prototype, evaluate, and refine innovative defense mechanisms and mitigation strategies against identified threats, spanning model-based defenses, runtime controls, and detection techniques.
- Develop proof-of-concept exploits and testing methodologies to validate vulnerabilities and assess the effectiveness of proposed defenses.
- Collaborate with engineering and research teams to translate research findings into practical, scalable security solutions deployable across Google's agent ecosystem.
- Stay current with AI security, adversarial machine learning, and related security fields through literature review, conference attendance, and community engagement.