The Future of Life Institute, a nonprofit backed by Elon Musk, has called for a pause of at least six months on the training of AI systems more powerful than GPT-4.
In brief, the letter argues the following:
- AI that can compete with humans may pose serious risks to society and humanity.
- At present, no one (including the developers) is planning for or managing these systems in a way that allows them to be predicted or reliably controlled.
- Training of AI systems more powerful than GPT-4 should be paused for at least six months, and if this cannot be enacted quickly, governments should step in.
- AI labs and independent experts should use this six-month window to jointly develop and implement a set of shared safety protocols.
- In parallel, developers should work with policymakers to build AI governance systems.
Signatories include prominent figures such as Elon Musk, Steve Wozniak, and Yuval Noah Harari.
It seems that some people now regard AI as something like a nuclear weapon in civilian hands.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.