Responsible AI & Research Integration Cognizant

  • Company name: Cognizant
  • Working location: Office Location
  • Job type: Full Time

Experience: 10 - 18 years required

Pay:

Salary information not included

Type: Full Time

Location: Karnataka

Skills: Science, Systems thinking, Strategic influence, Transparency, Product strategy, Market intelligence, Partnerships, Thought leadership, Trust, Strategic partnerships, Scenario planning, Regulatory compliance, Responsible AI, Research integration, AI safety, Translation capabilities, Model alignment, Behavioral safety, Agentic behavior, Oversight mechanisms, Ecosystem development, Fairness, Interpretability, Risk-aware AI, AI governance frameworks, Regulatory landscapes, Compliance requirements, Cybersecurity implications, Multi-agent coordination, Autonomous system oversight, AI risk management, Safety infrastructure, Standards initiatives, AI policy dialogues

About Cognizant

Cognizant is a multinational information technology services and consulting firm headquartered in Teaneck, New Jersey, in the United States. It is listed on the NASDAQ-100 under the symbol CTSH.

Job Description

As a Senior Scientist specializing in Responsible AI & Research Integration, you will play a critical role in bridging the gap between academic research in AI safety and the practical development of AI products. Based in Bangalore, this high-impact position requires 10 to 18 years of experience in the field. Your primary responsibility will be to advance the frontiers of Responsible AI and AI safety through both foundational and applied research: approximately 60% of your time will be dedicated to research on topics such as model alignment, transparency, behavioral safety, and oversight mechanisms for autonomous systems, while the remaining 40% will involve translating those research insights into practical tools, features, and governance components that can be integrated into internal systems and external offerings.

Collaboration will be key in this role. You will work closely with the AI Research Lab and the Responsible AI Office to define research agendas and translate findings into product features and governance frameworks. You will also build partnerships with academic labs, participate in external working groups, and provide strategic intelligence on the evolving ecosystem of responsible AI technologies and companies. Further responsibilities include developing product roadmap specifications, evaluating early-stage startups in the AI safety space, and monitoring the competitive landscape to identify market gaps in responsible AI tooling. External engagement will include building collaborative relationships with academic labs, research consortia, and external fellows, and representing the company at research summits and public forums.

To excel in this role, you should have a PhD in Computer Science, Artificial Intelligence, or a related discipline, with a strong publication record in AI safety research. Experience in translating research into production-ready tools, collaborating with interdisciplinary teams, and evaluating early-stage AI companies will be essential. Strong communication skills, the ability to synthesize insights from academic research, and a network within the responsible AI research community will also be valuable assets. If you are passionate about driving advancements in Responsible AI, thrive at the intersection of science, systems thinking, and strategic influence, and have a track record of contributing to cutting-edge research and product development, this role offers a unique opportunity to make a significant impact in the field.