
OpenAI’s Latest Announcements


New ways to manage your data in ChatGPT

ChatGPT users can now turn off chat history, allowing them to choose which conversations can be used to train our models.

We’ve introduced the ability to turn off chat history in ChatGPT. Conversations started while chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar. These controls, which are rolling out to all users starting today, can be found in ChatGPT’s settings and can be changed at any time. We hope this provides an easier way to manage your data than our existing opt-out process. When chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before deleting them permanently. https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt

Introducing the Bug Bounty Program

This initiative is essential to our commitment to develop safe and advanced AI. As we create technology and services that are secure, reliable, and trustworthy, we need your help.

The OpenAI Bug Bounty Program is a way for us to recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure. We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone. See the Bug Bounty Program page for details.

Our approach to AI safety

Prior to releasing any new system we conduct rigorous testing, engage external experts for feedback, work to improve the model’s behavior with techniques like reinforcement learning with human feedback, and build broad safety and monitoring systems.

For example, after our latest model, GPT-4, finished training, we spent more than 6 months working across the organization to make it safer and more aligned prior to releasing it publicly.

We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take. https://openai.com/blog/our-approach-to-ai-safety

ChatGPT Plugins

We’ve implemented initial support for plugins in ChatGPT. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services. In line with our iterative deployment philosophy, we are gradually rolling out plugins in ChatGPT so we can study their real-world use, impact, and safety and alignment challenges—all of which we’ll have to get right in order to achieve our mission. https://openai.com/blog/chatgpt-plugins
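At a high level, a plugin is simply a web service that publishes a manifest and a machine-readable API description so the model knows when and how to call it. The sketch below is a minimal, hypothetical Python example using FastAPI; the endpoint names, the manifest fields shown, and the to-do-list domain are illustrative assumptions, not OpenAI’s official plugin specification.

# Minimal, hypothetical sketch of a ChatGPT-style plugin backend.
# Assumptions: FastAPI serves the API; the manifest fields and the /todos
# endpoints are illustrative only, not an official OpenAI specification.
from fastapi import FastAPI

app = FastAPI()

# In-memory store standing in for a real third-party service.
TODOS: list[str] = []

@app.get("/.well-known/ai-plugin.json")
def plugin_manifest() -> dict:
    # The manifest describes what the plugin does and where its API lives,
    # so the model can decide when the plugin is relevant.
    return {
        "name_for_model": "todo_list",
        "description_for_model": "Add and list the user's to-do items.",
        "api": {"type": "openapi", "url": "http://localhost:8000/openapi.json"},
    }

@app.post("/todos")
def add_todo(item: str) -> dict:
    # The model would call this operation when the user asks to add a task.
    TODOS.append(item)
    return {"todos": TODOS}

@app.get("/todos")
def list_todos() -> dict:
    # The model would call this operation to retrieve the current list.
    return {"todos": TODOS}

FastAPI generates an OpenAPI description at /openapi.json automatically; in this sketch, that description is the piece the model would read to learn which operations the plugin exposes and what parameters they take.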

