AI Enablement & Governance - AI Security & Controls Lead

Alight Solutions

United States
$140,000 - $180,000 / year
full-time
senior
Posted April 28, 2026
via himalayas

About This Role

Our Story

At Alight, we believe a company's success starts with its people. At our core, we Champion People, help our colleagues Grow with Purpose, and, true to our name, we encourage colleagues to Be Alight. We are passionate about connecting purpose with impact. Alight empowers clients to build a healthier and more financially secure workforce by unifying the benefits ecosystem across health, wealth, wellbeing, navigation, and absence management.

Our Benefits

With a comprehensive total rewards package, Alight offers programs and plans that support your mind, body, wallet, and life. Benefits include health, dental, and vision coverage starting Day One. Additionally, Alight colleagues enjoy wellbeing programs, retirement plans with contribution matching, generous time off, parental leave, continuing education, and career growth opportunities, all within a thriving global organization.

Flexible Working

So that you can be your best at work and at home, we consider flexible working arrangements wherever possible. Alight has been a leader in the flexible workspace and a Top 100 Company for Remote Jobs six years in a row.

Great Place to Work

Thanks to the work of every colleague, Alight has received multiple awards of recognition, including Great Place to Work for the past seven years and Fortune's Best Companies to Work For. To learn more about our company culture and awards, Click Here.

If you Champion People, seek to Grow with Purpose, and embody the meaning of Be Alight, we invite you to join our team! Learn more at careers.alight.com.

The Role

The AI Enablement & Governance - AI Security & Controls Lead enables secure, responsible, and scalable AI adoption by defining, implementing, and evaluating AI-specific security and risk controls across the AI lifecycle.
This role serves as a bridge between AI engineering, information security, privacy, and third-party risk teams, ensuring that incremental AI risks introduced by models, training data, RAG architectures, and autonomous or semi-autonomous agents are appropriately controlled by design. The role partners closely with AI Engineering, Third Party Supplier Governance, Information Security, Privacy, and Risk teams to identify AI-specific control gaps, define practical control requirements, support secure implementation, and evaluate effectiveness. The focus is on AI-specific security concerns: not replacing existing security programs, but extending them thoughtfully for AI.

Responsibilities

AI Security, Policy, Standards & Guidance

• Partnering directly with AI Engineers & Developers, Information Security, and governance teams to define AI-specific security and risk management standards covering AI/ML models, RAG solutions, and agentic architectures.
• Translating enterprise security principles and risk frameworks into AI-appropriate guidance, addressing topics such as model access control and abuse prevention; prompt and context security; data leakage, memorization, and inference risks; and agent autonomy boundaries and safeguards.
• Defining AI runtime monitoring and incident response expectations, aligned to (and extending as needed) existing incident response playbooks.
• Ensuring AI security guidance remains aligned with evolving technology patterns, internal architectures, and external expectations (e.g., NIST AI RMF/CSF, NYDFS AI Cybersecurity guidance, ISO/IEC 42001).
• Contributing to the broader AI policy hierarchy by ensuring security requirements are clearly mapped to AI governance policies, controls, and standards.

Third Party AI & Model Risk Support

• Partnering with third-party risk and supplier governance teams to identify AI-specific risks introduced by vendors, models, platforms, and APIs.
• Defining AI security control expectations for vendors and managed services.
• Supporting evaluation of vendor AI security posture, including training data handling, model protections, monitoring, and incident response capabilities.
• Contributing AI-specific input to due diligence, onboarding, and ongoing vendor risk assessments.

Cross Functional Enablement & Advisory Support

• Acting as a trusted advisor to AI engineering, product, privacy, and security teams on how to safely design and deploy AI systems.
• Providing practical guidance that balances security rigor with business velocity.
• Helping teams understand what secure by design means for AI, without imposing unnecessary friction.

Requirements

• 5+ years of relevant experience (or equivalent expertise) in information security, technology risk, AI governance, model risk management, privacy engineering, or related roles.
• Strong understanding of AI architectures, machine learning pipelines, retrieval-augmented generation (RAG), and agentic/tool-using AI patterns.
• Demonstrated ability to translate technical AI and security concepts into clear control expectations and guidance.
• Experience working cross-functionally with engineering, security, privacy, and risk teams.
• Pra...

Ready to Apply?

Click the button below to visit the company's application page.

Apply for this Position