zlacker

[parent] [thread] 3 comments
1. happyt+(OP)[view] [source] 2023-05-16 20:23:41
Hear, hear. Excellent point, and I don't mean to imply it shouldn't be regulated. However, it has been my general experience that concentrating immense power in governments doesn't typically lead to more security, so perhaps we just have a difference of philosophy.

Democracy will not withstand AI when it's fully developed. Let me offer a better-written explanation of my general views than I could ever muster in a comment on HN, in the form of a quote from an article by Dr. Thorsten Thiel (Head of the Research Group "Democracy and Digitalization" at the Weizenbaum Institute for the Networked Society):

> The debate on AI’s impact on the public sphere is currently the one most prominent and familiar to a general audience. It is also directly connected to long-running debates on the structural transformation of the digital public sphere. The digital transformation has already paved the way for the rise of social networks that, among other things, have intensified the personalization of news consumption and broken down barriers between private and public conversations. Such developments are often thought to be responsible for echo-chamber or filter-bubble effects, which in turn are portrayed as root causes of the intensified political polarization in democracies all over the world. Although empirical research on filter bubbles, echo chambers, and societal polarization has convincingly shown that the effects are grossly overestimated and that many non-technology-related reasons better explain the democratic retreat, the spread of AI applications is often expected to revive the direct link between technological developments and democracy-endangering societal fragmentation.

> The assumption here is that AI will massively enhance the possibilities for analyzing and steering public discourses and/or intensify the automated compartmentalizing of will formation. The argument goes that the strengths of today's AI applications lie in the ability to observe and analyze enormous amounts of communication and information in real time, to detect patterns and to allow for instant and often invisible reactions. In a world of communicative abundance, automated content moderation is a necessity, and commercial as well as political pressures further effectuate that digital tools are created to oversee and intervene in communication streams. Control possibilities are distributed between users, moderators, platforms, commercial actors and states, but all these developments push toward automation (although they are highly asymmetrically distributed). Therefore, AI is baked into the backend of all communications and becomes a subtle yet enormously powerful structuring force.

> The risk emerging from this development is twofold. On the one hand, there can be malicious actors who use these new possibilities to manipulate citizens on a massive scale. The Cambridge Analytica scandal comes to mind as an attempt to read and steer political discourses (see next section on electoral interference). The other risk lies in a changing relationship between public and private corporations. Private powers are becoming increasingly involved in political questions and their capacity to exert opaque influences over political processes has been growing for structural and technological reasons. Furthermore, the reshaping of the public sphere via private business models has been catapulted forward by the changing economic rationality of digital societies such as the development of the attention economy. Private entities grow stronger and become less accountable to public authorities; a development that is accelerated by the endorsement of AI applications which create dependencies and allow for opacity at the same time. The ‘politicization’ of surveillance capitalism lies in its tendency, as Shoshana Zuboff has argued, to not only be ever more invasive and encompassing but also to use the data gathered to predict, modify, and control the behavior of individuals. AI technologies are an integral part in this ‘politicization’ of surveillance capitalism, since they allow for the fulfilment of these aspirations. Yet at the same time, AI also insulates the companies developing and deploying it from public scrutiny through network effects on the one hand and opacity on the other. AI relies on massive amounts of data and has high upfront costs (for example, the talent required to develop it, and the energy consumed by the giant platforms on which it operates), but once established, it is very hard to tame through competitive markets. Although applications can be developed by many sides and for many purposes, the underlying AI infrastructure is rather centralized and hard to reproduce. As in other platform markets, the dominant players are those able to keep a tight grip on the most important resources (models and data) and to benefit from every individual or corporate user. Therefore, we can already see that AI development tightens the grip of today’s internet giants even further. Public powers are expected to make increasing use of AI applications and therefore become ever more dependent on the actors that are able to provide the best infrastructure, although this infrastructure, for commercial and technical reasons, is largely opaque.

> The developments sketched out above – the heightened manipulability of public discourse and the fortification of private powers – feed into each other, with the likely result that many of the deficiencies already visible in today’s digital public spheres will only grow. It is very hard to estimate whether these developments can be counteracted by state action, although a regulatory discourse has kicked in and the assumption that digital matters elude the grasp of state regulation has often been proven wrong in the history of networked communication. Another possibility would be a creative appropriation of AI applications through users whose democratic potential outweighs its democratic risks thus enabling the rise of differently structured, more empowering and inclusive public spaces. This is the hope of many of the more utopian variants of AI and of the public sphere literature, according to which AI-based technologies bear the potential of granting individuals the power to navigate complex, information-rich environments and allowing for coordinated action and effective oversight (e.g. Burgess, Zarkadakis).

Source: https://us.boell.org/en/2022/01/06/artificial-intelligence-a...

Social bots and deep fakes, the primary technologies discussed when asking whether democracy can survive, will become very good very quickly. I doubt there will be another election without extensive use of these technologies, in a plethora of capacities ranging from influence marketing to outright destabilization campaigns. I'm not sure what government could deal with a threat like that, but I suspect the recent push to revise tax systems and create a single global standard for multinational taxation, recently the subject of an excellent talk at the WEF, is more than tangentially related to the AI debate.

So, is it a transformational technology that will liberate mankind, or a nuclear bomb? Because ultimately, that is the question in my mind.

Excellent comment, and I agree with your sentiment. I just don't think concentrating control of the technology before it's really developed is wise or prudent.

replies(2): >>vortex+f3 >>downWi+tm
2. vortex+f3[view] [source] 2023-05-16 20:42:05
>>happyt+(OP)
*Hear, hear.
replies(1): >>happyt+g4
3. happyt+g4[view] [source] [discussion] 2023-05-16 20:47:03
>>vortex+f3
Thank you. Corrected.
4. downWi+tm[view] [source] 2023-05-16 22:32:33
>>happyt+(OP)
It's possible that the tsunami of fakes will break down trust in a beneficial way, where people only believe things they've put effort into verifying.