
Amazon's New Leap in AI: Bringing Human Insight into AI Model Evaluation

May 17, 2024

Amazon's Innovative Approach to AI Model Testing

Amazon has introduced an intriguing concept: human benchmarking teams for testing AI models. The initiative goes beyond evaluating models before they are put to use; it is a deliberate step towards building human judgment into the evaluation process. At the heart of the move is Amazon's aim to improve how customers assess AI models, with more human involvement along the way.

During the renowned AWS re:Invent conference, Swami Sivasubramanian, AWS vice president of database, analytics, and machine learning, unveiled 'Model Evaluation on Bedrock,' a new feature now available in preview on Amazon's Bedrock platform. Without a transparent way to test models, developers often pick ones that are not accurate enough for specific tasks, such as question-and-answer projects, or far larger than their requirements call for.

Sivasubramanian emphasizes, “Model selection and evaluation is not just a one-time activity, but a continual process.” Incorporating a human element is vital, he notes, and the new feature offers an efficient way to manage human evaluation workflows and gauge model performance.

The Challenge of Choosing the Right Model

In a revealing interview with The Verge, Sivasubramanian noted that developers often struggle with choosing the appropriate model size for their projects. Many assume that a more powerful model would automatically meet their needs, only to realize later that a smaller one would have sufficed.

Model Evaluation on Bedrock consists of two parts: automated evaluation and human evaluation. The automated segment allows developers to select and test a model through the Bedrock console, evaluating its performance on various metrics like robustness, accuracy, or toxicity. This is especially useful for tasks like summarization, text classification, and text generation. Notably, Bedrock includes well-known third-party AI models such as Meta's Llama 2, Anthropic's Claude 2, and Stability AI's Stable Diffusion.

AWS offers test datasets for this purpose, but customers can also bring their own data to better understand how the models behave in their specific context. Each evaluation produces a comprehensive report.
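For readers who want a more concrete picture than the console walkthrough, evaluation jobs can also be started programmatically. The sketch below is a minimal, illustrative example using the boto3 Bedrock client's create_evaluation_job operation; the field names, task type, built-in metric identifiers, bucket paths, and IAM role ARN are assumptions made for illustration and should be checked against the current Bedrock API reference rather than treated as AWS's exact interface.

```python
import boto3

# Illustrative sketch only: names, ARNs, and field shapes below are placeholders
# and assumptions, not a verbatim copy of the Bedrock API.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# A customer-supplied dataset is assumed to be a JSON Lines file in S3,
# one record per line, for example:
# {"prompt": "Summarize: ...", "referenceResponse": "...", "category": "billing"}

response = bedrock.create_evaluation_job(
    jobName="summarization-eval-preview",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",  # placeholder role
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "Summarization",
                    "dataset": {
                        "name": "SupportTicketSummaries",
                        "datasetLocation": {"s3Uri": "s3://my-eval-bucket/tickets.jsonl"},
                    },
                    # Metric identifiers are assumptions; Bedrock exposes
                    # accuracy-, robustness-, and toxicity-style metrics per task.
                    "metricNames": ["Builtin.Accuracy", "Builtin.Robustness", "Builtin.Toxicity"],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [{"bedrockModel": {"modelIdentifier": "anthropic.claude-v2"}}]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-bucket/results/"},
)
print("Started evaluation job:", response["jobArn"])
```

Once a job like this finishes, the report described above would land in the configured S3 output location.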

The Human Touch in AI Evaluation

When human evaluation comes into play, users can either collaborate with an AWS human evaluation team or utilize their own. Customers define the task type, evaluation metrics, and the dataset they wish to use. AWS provides customized pricing and timelines for those opting for its assessment team.
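For teams that bring their own evaluators, the same job accepts a human-evaluation configuration instead of (or alongside) the automated one. The fragment below is a rough, hedged sketch of how that configuration might be shaped, with custom metrics of the kind a customer could define; the field names, rating methods, work-team ARN, and instructions text are all assumptions for illustration, not the definitive API.

```python
# Rough sketch of a human-evaluation configuration for the same job API.
# Every field name, allowed value, and ARN here is an assumption or placeholder.
human_evaluation_config = {
    "human": {
        "humanWorkflowConfig": {
            # A private work team managed by the customer (placeholder ARN).
            "flowDefinitionArn": "arn:aws:sagemaker:us-east-1:123456789012:flow-definition/my-eval-team",
            "instructions": "Rate each response for empathy and friendliness.",
        },
        # Customer-defined metrics and how evaluators score them.
        "customMetrics": [
            {"name": "Empathy", "description": "Does the response acknowledge the user's situation?", "ratingMethod": "IndividualLikertScale"},
            {"name": "Friendliness", "description": "Is the tone warm and helpful?", "ratingMethod": "ThumbsUpDown"},
        ],
        "datasetMetricConfigs": [
            {
                "taskType": "Generation",
                "dataset": {
                    "name": "CustomerChats",
                    "datasetLocation": {"s3Uri": "s3://my-eval-bucket/chats.jsonl"},
                },
                "metricNames": ["Empathy", "Friendliness"],
            }
        ],
    }
}
```

The key point is that the customer, not AWS, defines the task type, the metrics, and the dataset; the workforce is simply another parameter. Opting for AWS's own evaluation team instead is arranged separately, with customized pricing and timelines.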

Vasi Philomin, AWS vice president for generative AI, stressed in another interview with The Verge the importance of understanding model performance to guide development. The process also helps companies determine whether models meet responsible AI standards, for example on toxicity.

“It’s crucial for models to work effectively for our customers, helping them identify the most suitable model, and we’re committed to enhancing their evaluation capabilities,” Philomin explained.

Sivasubramanian also pointed out that human evaluators can detect aspects that automated systems might miss, such as empathy or friendliness in AI models.

Optional Yet Beneficial Benchmarking

Philomin clarified that AWS does not require customers to benchmark models. Developers who are already familiar with Bedrock's foundation models, or confident in their understanding of a model's capabilities, might not need the service. Companies still weighing up different models, however, could gain significantly from the benchmarking process.

While the benchmarking service is in its preview phase, AWS charges only for the model inference used during evaluations.

While there is no universal standard for benchmarking AI models, some industry-specific metrics are commonly accepted. Philomin notes that the goal of Bedrock benchmarking is not broad, general-purpose evaluation, but giving companies a way to assess how a model performs on their specific projects.

In summary, Amazon's latest initiative in AI model evaluation is a blend of technological precision and human insight, offering a comprehensive and nuanced approach to understanding and selecting AI models. This innovative step not only enhances model selection but also embeds a crucial human element in the rapidly evolving AI landscape.
