AI safety researchers leave OpenAI over prioritization concerns

Following the recent resignations, OpenAI has dissolved its 'Superalignment' team and will fold its work into other research efforts across the organization.

Tags: OpenAI, AI safety, AGI, Ilya Sutskever, Jan Leike, Superalignment team, governance crisis, internal restructuring
