
For anyone worried that attackers could take over their ChatGPT and Codex accounts, OpenAI announced Thursday that it is adding a new account protection system with an extra layer of security. Called Advanced Account Security, the feature enforces stricter login controls that make account hijacking more difficult.
Such protections are not new ideas in account security. Google, for example, has offered a comparable feature through its Advanced Protection Program for about ten years. But as AI applications proliferate around the world, there is a growing need for stronger security measures. OpenAI says the feature is part of a series of cybersecurity measures it announced earlier this month.
“People are turning to AI for deeper questions and more and more tasks,” the company said Thursday in a blog post. “Over time, a ChatGPT account can come to hold personal and professional information and be connected to other tools. For some people, such as journalists, elected officials, political dissidents, researchers, and security-conscious users, the stakes are especially high.”
People who turn on Advanced Account Security can no longer sign in with just a password. Instead, they must register two passkeys or physical security keys, which greatly reduces the risk of a successful phishing attack. The feature also removes email and SMS as account-recovery options; users must instead rely on recovery codes, backup passkeys, or physical security keys. OpenAI says it has partnered with Yubico to offer discounted YubiKey bundles to Advanced Account Security users.
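The recovery-code scheme described above (one-time codes that replace email and SMS recovery) can be sketched roughly like this. This is a hypothetical illustration of the general technique, not OpenAI's actual implementation; the function names and parameters are invented for the example:

```python
import secrets
import hashlib

def generate_recovery_codes(count=10, nbytes=8):
    """Generate one-time recovery codes for the user.

    The plaintext codes are shown to the user exactly once;
    the server keeps only their hashes, so a database leak
    does not expose usable codes.
    """
    codes = [secrets.token_hex(nbytes) for _ in range(count)]
    stored_hashes = {hashlib.sha256(c.encode()).hexdigest() for c in codes}
    return codes, stored_hashes

def redeem(code, stored_hashes):
    """Accept a code at most once: remove its hash after a successful use."""
    h = hashlib.sha256(code.encode()).hexdigest()
    if h in stored_hashes:
        stored_hashes.discard(h)
        return True
    return False
```

The key design choice in schemes like this is that each code is single-use and only hashes are stored server-side, which is what makes them safer than email or SMS recovery links that can be intercepted or replayed.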
Courtesy of OpenAI
Most importantly, when a user turns on Advanced Account Security, they can no longer ask OpenAI's support team to restore the account, because support no longer has access to, or control over, any recovery methods. This closes off social-engineering attacks in which attackers try to take over an account by impersonating its owner through support channels.
Advanced Account Security also adds extra login prompts and shorter session lifetimes for signed-in devices. It generates a notification every time someone signs into a protected account, and points users to a dashboard where they can review active ChatGPT and Codex sessions. Additionally, while OpenAI offers every user the option to opt out of having their ChatGPT conversations used for model training, this opt-out is enabled by default only for Advanced Account Security users.
Members of OpenAI’s Trusted Access for Cyber program, which gives cybersecurity experts, researchers, and others access to new models, will be required to turn on Advanced Account Security starting June 1, or provide other proof that they use phishing-resistant authentication through their own organizations.