Amazon was one of the tech giants that agreed to a set of White House recommendations regarding the use of generative AI last year. The privacy considerations addressed in those recommendations continue to roll out, with the latest included in the announcements at the AWS Summit in New York on July 9. In particular, contextual grounding for Guardrails for Amazon Bedrock provides customizable content filters for organizations deploying their own generative AI.

AWS Responsible AI Lead Diya Wynn spoke with TechRepublic in a virtual prebriefing about the new announcements and how companies balance generative AI’s wide-ranging knowledge with privacy and inclusion.

AWS NY Summit announcements: Changes to Guardrails for Amazon Bedrock

Guardrails for Amazon Bedrock, the safety filter for generative AI applications hosted on AWS, has new enhancements:

  • Fine-tuning for Anthropic’s Claude 3 Haiku is available in preview on Amazon Bedrock starting July 10.
  • Contextual grounding checks have been added to Guardrails for Amazon Bedrock, which detect hallucinations in model responses for retrieval-augmented generation and summarization applications.

In addition, Guardrails is expanding into the standalone ApplyGuardrail API, which lets Amazon businesses and AWS customers apply safeguards to generative AI applications even when the underlying models are hosted outside of AWS infrastructure. That means app creators can apply toxicity and content filters and flag sensitive information they want excluded from the application. Wynn said custom Guardrails can reduce harmful content by up to 85%.

Contextual grounding and the ApplyGuardrail API will be available July 10 in select AWS regions.
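As a rough sketch of how a developer might use the standalone API: in the AWS SDK for Python (boto3), the `apply_guardrail` operation on the `bedrock-runtime` client takes a guardrail identifier and version, a `source` (`INPUT` or `OUTPUT`), and the content to evaluate; content blocks can carry qualifiers such as `grounding_source` so the contextual grounding check knows which reference text a model response should be faithful to. The guardrail ID, version, and region below are placeholders, and a live call requires AWS credentials plus an existing guardrail, so this sketch only builds and inspects the request payload.

```python
# Placeholder identifiers -- replace with a real guardrail ID and version.
GUARDRAIL_ID = "gr-EXAMPLE123"
GUARDRAIL_VERSION = "1"

def build_apply_guardrail_request(model_output: str, grounding_source: str) -> dict:
    """Build the request payload for a bedrock-runtime ApplyGuardrail call.

    The "grounding_source" qualifier marks the reference text that the
    contextual grounding check compares the model output against; the
    "guardrail_content" qualifier marks the text being evaluated.
    """
    return {
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": GUARDRAIL_VERSION,
        "source": "OUTPUT",  # evaluate a model response rather than user input
        "content": [
            {"text": {"text": grounding_source, "qualifiers": ["grounding_source"]}},
            {"text": {"text": model_output, "qualifiers": ["guardrail_content"]}},
        ],
    }

request = build_apply_guardrail_request(
    model_output="The summit was held on July 9 in New York.",
    grounding_source="AWS held its Summit in New York on July 9, 2024.",
)

# With credentials configured, the live call would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.apply_guardrail(**request)
# The response's "action" field indicates whether the guardrail intervened.
```

Because the model under evaluation can be hosted anywhere, this same payload shape works whether the response came from a Bedrock-hosted model or one running outside AWS.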

Guardrails for Amazon Bedrock allows customers to customize the content a generative AI model will embrace or avoid. Image: AWS

Contextual grounding for Guardrails for Amazon Bedrock is part of the wider AWS responsible AI strategy

Contextual grounding connects to the overall AWS responsible AI strategy as part of AWS’s continued effort in “advancing the science as well as continuing to innovate and provide our customers with services that they can leverage in developing their services, developing AI products,” Wynn said.

“One of the areas that we hear often as a concern or consideration for customers is around hallucinations,” she said.

Contextual grounding — and Guardrails in general — can help mitigate that problem. Guardrails with contextual grounding can reduce up to 75% of the hallucinations previously seen in generative AI, Wynn said.

The way customers look at generative AI has changed as generative AI has become more mainstream over the last year.

“When we started some of our customer-facing work, customers weren’t necessarily coming to us, right?” said Wynn. “We were, you know, looking at specific use cases and helping to support like development, but the shift in the last year plus has ultimately been that there is a greater awareness [of generative AI] and so companies are asking for and wanting to understand more about the ways in which we’re building and the things that they can do to ensure that their systems are safe.”

That means “addressing questions of bias” as well as reducing security issues or AI hallucinations, she said.

Additions to the Amazon Q enterprise assistant and other announcements from AWS NY Summit

AWS announced a host of new capabilities and tweaks to products at the AWS NY Summit. Highlights include:

  • A developer customization capability in the Amazon Q enterprise AI assistant to secure access to an organization’s code base.
  • The addition of Amazon Q to SageMaker Studio.
  • The general availability of Amazon Q Apps, a tool for deploying generative AI-powered apps based on company data.
  • Access to Scale AI on Amazon Bedrock for customizing, configuring and fine-tuning AI models.
  • Vector Search for Amazon MemoryDB, accelerating vector search speed in vector databases on AWS.

SEE: Amazon recently announced Graviton4-powered cloud instances, which can support AWS’s Trainium and Inferentia AI chips.

AWS hits cloud computing training goal ahead of schedule

At its Summit NY, AWS announced it has surpassed its goal of training 29 million people worldwide in cloud computing skills by 2025: 31 million people across 200 countries and territories have already taken cloud-related AWS training courses.

AI training and roles

AWS training offerings are numerous, so we won’t list them all here, but free cloud computing training has been offered worldwide, both in person and online. That includes training on generative AI through the AI Ready initiative. Wynn highlighted two roles people can train for in the new careers of the AI age: prompt engineer and AI engineer.

“You may not have data scientists necessarily engaged,” Wynn said. “They’re not training base models. You’ll have something like an AI engineer, perhaps.” The AI engineer will fine-tune the foundation model, adding it into an application.

“I think the AI engineer role is something that we’re seeing an increase in visibility or popularity,” Wynn said. “I think the other is where you now have people that are responsible for prompt engineering. That’s a new role or area of skill that’s necessary because it’s not as simple as people might think, right, to give your input or prompt, the right kind of context and detail to get some of the specifics that you might want out of a large language model.”

TechRepublic covered the AWS NY Summit remotely. 

Subscribe to the Innovation Insider Newsletter

Catch up on the latest tech innovations that are changing the world, including IoT, 5G, the latest about phones, security, smart cities, AI, robotics, and more. Delivered Tuesdays and Fridays
