This week, from the China Development High-Level Forum to the Boao Forum for Asia to the Zhongguancun Forum, a series of forums focused on the economy and technology. Developments and discussions around artificial intelligence, humanoid robots, and related fields drew particular attention.
On March 27, the 2025 Zhongguancun Forum Annual Meeting officially opened. This technology-focused forum again devotes substantial content to artificial intelligence. What stage has AI development reached? Building on the current situation, how can AI continue to advance? And as it develops, how should security concerns be addressed? "News 1+1" spoke with Huang Minlie, professor at the Artificial Intelligence Research Institute of Tsinghua University, and Zhang Linghan, professor at the Data Rule of Law Research Institute of China University of Political Science and Law, for analysis and interpretation.
What has changed in the focus of artificial intelligence?
Huang Minlie, professor at the Artificial Intelligence Research Institute of Tsinghua University: I think this year's changes center on two hot topics:
1. Embodied intelligence. This year, embodied intelligence has become the most important research hotspot and direction for industrial deployment aside from generative large models. Investment and financing in this area are very active, because embodied intelligence combines large models with robotic bodies, enabling the highly visible application scenarios we see today, such as humanoid robots and autonomous driving.
2. The other very important change is AI for Science, that is, artificial-intelligence-driven scientific research. Why is AI for Science becoming so important now? In the past, large models were mainly used to chat, draft official documents, and handle document tasks. But can AI help you innovate? Can it contribute to major scientific discoveries, such as finding new materials, or solving important basic scientific problems in medical research and even life science research? This direction is the future. How can we truly use AI to create new things, discover new knowledge, and pursue important scientific discoveries that humans have not made before? This is a very important focus of our attention.
Both of these points were well reflected at this year's Zhongguancun Forum.
What is the key to developing artificial intelligence applications in industrial scenarios?
Huang Minlie: I think industrial scenarios will be a very important direction for AI applications in the future, and the most important breakthrough direction for realizing new quality productive forces. Because AI applications in industrial scenarios demand high reliability, we need to solve three key problems: 1. Can the level of intelligence be raised further? Current large models are already very capable in many respects, but in industrial scenarios the data must be deeply integrated with actual workflows. Can the intelligence reach a higher level in that combination?
2. Reliability. The tolerance for error in industrial scenarios is very small: once errors occur, they can have destructive consequences, so a relatively high level of reliability is required. Reliability is also a very important direction in current AI research.
3. Security. As we deploy a large amount of new-generation AI technology, it profoundly affects the relationship between humans and AI, and raises the question of what impact AI will have on the development of society as a whole. This is, in essence, a governance issue. Security and governance are also important areas that will require attention going forward.
How should the risks and security issues in AI development be addressed?
Zhang Linghan, professor at the Data Rule of Law Research Institute of China University of Political Science and Law: In fact, risk has been fully taken into account in the formulation of AI laws and regulations. In September last year, China issued the "Artificial Intelligence Security Governance Framework", which divides AI risks into two categories, endogenous risks and application risks; in effect, the technical risks of AI and the risks of its misuse. For the governance of endogenous risks, the principle we propose is to "govern technology with technology", using technology to control technology: during research, development, and deployment, AI systems must be fully evaluated, value-aligned, and continuously monitored in operation. For application risks, we should not only focus on the legal governance and supervision of AI itself, but also formulate and refine the laws of AI-related industries, such as autonomous driving, smart healthcare, and AI companionship, so as to make comprehensive arrangements.