Green Island Public Opinion | In the tide of AI empowering government affairs, don’t let empowerment become negative energy

As AI technology is widely applied in government affairs, it brings both convenience and new challenges.
Core content:
1. The boom in applying AI technology to government affairs, and its actual effects
2. The "digital divide" problem and its impact on special groups
3. Concerns over data security and credibility, and over-reliance on AI
Since the Spring Festival, large-model technologies represented by DeepSeek have become a global talking point, and the deep embedding of AI into social life is now a clear trend. Local governments' enthusiasm for applying large-model technology to government affairs has been equally unstoppable: by incomplete count, at least 10 provinces have announced since the holiday that they have connected to DeepSeek and deployed it in government office work, public services, urban governance, and other fields, for example "Shenzhen fully adopts DeepSeek to raise the intelligence level of government services" and "the Liaoning 12345 hotline platform has officially connected to DeepSeek". News that "a county Party secretary in Guangxi requires cadres to use DeepSeek", along with netizens on social media sharing how they use DeepSeek to write work summaries and reflections, continues to attract attention. At the same time, these discussions have raised concerns about data security, service quality, and job displacement.
DeepSeek access to government services in some provinces and cities
1. The "digital divide" deserves more attention as the public uses these services. In recent years, complaints that "intelligent customer service is not intelligent" have repeatedly sparked heated discussion: answers that miss the point of the question, human-agent options buried deep in the interface, and poor recognition of spoken queries are widely criticized. Complex electronic workflows and uneven recognition of dialects and colloquialisms mean that "AI services" have not made things more convenient; for some people they have raised the cost of getting things done. If actual performance after connecting to a large model is poor, problems such as "AI passing the buck" are even more likely to fuel public-opinion flare-ups.
At the same time, in highly "intelligent" social scenarios, special groups such as the elderly may struggle to adapt to electronic and intelligent processes, and their everyday channels for seeking help may be further squeezed. Such "digital divide" issues deserve to be taken even more seriously.
2. Data-leak risks cause concern, and the credibility of government services is easily damaged. In discussions about AI use, the risk of information leakage draws the most concern. On the one hand, under the trend of "requiring leading cadres to use DeepSeek", the risk of "shadow AI", meaning the casual sharing of sensitive content with public AI tools, is more likely to spread into government departments, further raising the risk of leaks of government-related information.
On the other hand, because government departments hold large amounts of sensitive data about people's livelihoods, a leak, whether through human error or malicious attack, can easily deepen public distrust of government departments.
3. Over-reliance on AI and exaggeration of its role easily invite public-opinion backlash. Amid overwhelming media publicity and online discussion, the tendency to over-mythologize AI's capabilities deserves more attention. First, if the actual experience after government services connect to AI falls short of the convenience promised in publicity, complaints and doubts follow easily, even disputes over "AI for AI's sake" formalism.
Second, large models still risk "fabricating non-existent cases or data". Over-reliance on AI may lead to errors in review and decision-making, or even the spread of false government-related information, which is especially sensitive.
Third, over-emphasizing work capability and results after AI is adopted can easily leave the public feeling that "public officials are not doing their jobs", and discussions of "AI replacing civil servants" can easily spill into sensitive topics such as employment anxiety.
Response suggestions:
1. Keep improving the real-world performance of the technology and avoid disputes over formalism. As connecting government services to AI becomes a trend, be alert to sensitive narratives in actual use such as "intelligent customer service is not intelligent", "AI passes the buck", and "technology takes the blame", which can call the necessity of "government AI" into question. In window-service units especially, AI that is deployed but unused, or that the elderly and other special groups find hard to use, is even more likely to draw public criticism. It is advisable to keep tracking usage feedback, optimize the scope of application and the user experience, and continuously improve usability, so that "AI government affairs" does not become a vanity project and attract accusations of "formalism".
2. Strengthen data-security protections and raise staff's information-security awareness. In government AI applications, security should be the focus. On the one hand, it is advisable to keep improving sensitive-data protection mechanisms on the technical side, for example by making every AI data access traceable and auditable to reduce leak risk. On the other hand, regular security training should cover public officials at all levels and of all types, not only to deter deliberate leaks but also to eliminate unwitting operational risks such as processing confidential documents with public AI tools or uploading unredacted raw data.
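The idea of not uploading unredacted raw data can be sketched in code. The following is a minimal illustration, not a complete desensitization policy: before any text is handed to a public AI tool, common categories of sensitive data are masked with labeled placeholders. The two patterns (mainland resident ID numbers and mobile numbers) are illustrative assumptions; a real deployment would cover far more categories.

```python
import re

# Illustrative patterns only: an 18-digit resident ID number and a
# mainland mobile number. A production policy would need many more.
PATTERNS = {
    "ID": re.compile(r"\b\d{17}[\dXx]\b"),    # 18-char resident ID
    "PHONE": re.compile(r"\b1[3-9]\d{9}\b"),  # 11-digit mobile number
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder like [ID]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Applicant ID 11010519900307234X, contact 13812345678."
print(redact(sample))  # prints: Applicant ID [ID], contact [PHONE].
```

A redaction step like this would sit in front of any outbound call to a public model, so that even careless use of "shadow AI" only ever transmits placeholders.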
3. Keep human review as the final checkpoint, and strictly guard against AI fabrication and misjudgment. The seriousness of government services and the complexity of policy interpretation mean that AI can serve as an "assistant" but should not be the one "making the call". Generative AI models can produce fictitious information or wrong conclusions because of ambiguity or errors in their training data; this widespread phenomenon is known as AI "hallucination". When using government AI, therefore, AI-generated content should be reviewed and proofread, for example by building a government-sector knowledge base, cross-checking AI output against it, and requiring human intervention for sensitive work such as policy interpretation and funding approval. Above all, avoid the lazy assumption that "whatever AI generates is correct", and strictly prevent the direct use of AI output from producing mistaken interpretations or fabricated content.
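The workflow above, cross-checking against a knowledge base plus mandatory sign-off for sensitive work, can be sketched as a simple release gate. The category names and the tiny knowledge base are illustrative assumptions, not any real system's schema.

```python
# Hypothetical sensitive categories that must never be auto-released.
SENSITIVE_CATEGORIES = {"policy_interpretation", "funding_approval"}

# Stand-in for a government-sector knowledge base of verified facts.
KNOWLEDGE_BASE = {"subsidy_cap_2024": "5000 yuan per household"}

def review_gate(draft: str, category: str, cited_facts: dict, human_approved: bool) -> str:
    """Decide whether an AI-generated draft may be released."""
    # Cross-check: every fact the draft cites must match the knowledge base,
    # catching hallucinated cases or figures before they reach the public.
    for key, value in cited_facts.items():
        if KNOWLEDGE_BASE.get(key) != value:
            return "rejected: unverified claim '%s'" % key
    # Sensitive work always waits for explicit human sign-off.
    if category in SENSITIVE_CATEGORIES and not human_approved:
        return "held: awaiting human review"
    return "released"
```

For example, a funding-approval draft with correct facts but no sign-off would return "held: awaiting human review", while a draft citing a figure absent from the knowledge base would be rejected outright; only routine content with verified facts flows through automatically.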
DeepSeek has stood out among the many large models because of its user-centered approach to deep reasoning. Government services differ from other scenarios: introducing AI is not about "who runs faster" but "who thinks more deeply". Technology upgrades can free government workers from repetitive, mechanical assembly-line tasks, but when they look up from that work, will they solve the people's urgent needs faster and more accurately, or hide behind standard answers instead of face-to-face communication and become "hands-off shopkeepers"?
Using AI to build a better service-oriented government requires people who truly care about the public to stay at the helm. After all, no standard answer, however polished, can quite replace the words "let me help you figure out a solution."