urbanists.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
We're a server for people who like bikes, transit, and walkable cities. Let's get to know each other!

#PoliticalWill

The Risks of Artificial Intelligence and Ways to Protect Ourselves<p><strong>Possible Causes and Consequences of Early Limitations of Artificial&nbsp;Intelligence</strong></p><p>Introduction</p><p>Imagine a world where the benefits of artificial intelligence—medical diagnoses, personalized education, automated translation, or accelerated scientific discovery—are available only to a privileged few. This is not the opening scene of a dystopian science fiction novel, but a real-world scenario toward which current trends point. While ChatGPT, Claude, and other advanced AI systems revolutionize our daily lives, forces are already forming in the background that could limit our access to them.</p><p>The development of artificial intelligence (AI) has accelerated dramatically over the past decade. What was a curiosity in the early 2010s has become an everyday tool – and it is precisely this rapid spread, and the vast range of possibilities it offers, that makes it likely that various interest groups will seek to restrict the technology. Whether for economic, security, or political reasons, the regulation of artificial intelligence seems inevitable.</p><p>The question before us is not whether there will be restrictions, but what form they will take and whose interests they will serve.
Restricting AI is not necessarily negative in itself – some regulation is essential for safe development – the danger lies in restrictions that are implemented unequally, non-transparently, or without democratic control.</p><p>In this paper, we examine why access to artificial intelligence is likely to be restricted, what dangers these restrictions may entail, and what we can do to ensure that AI remains useful and accessible to everyone in the coming decades.</p><p>Why will artificial intelligence be restricted?</p><ol><li>Resource usage and energy consumption</li></ol><p>AI, especially large language models, requires enormous computing resources and energy. Running AI systems is expensive, and as energy markets tighten, access to AI may need to be regulated as part of measures to reduce energy consumption.</p><p>Specific examples:</p><ul><li>Google DeepMind’s AlphaGo development used thousands of specialized TPU (Tensor Processing Unit) chips, which consume significant amounts of energy to operate.</li><li>Training OpenAI’s GPT-4 model is estimated to have consumed over 25 gigawatt-hours of electricity, equivalent to the annual consumption of approximately 2,000-3,000 US households.</li><li>In 2023, Ireland indicated that it would have to limit the construction of data centers due to congestion of the energy grid, which will indirectly affect the expansion of AI services in the country.</li></ul><ol start="2"><li>Security and ethical considerations</li></ol><p>As AI becomes more advanced, systems like ChatGPT raise serious concerns about user protection, data security, and potentially dangerous applications.
The rapid development of the technology carries risks, and regulation is necessary to ensure responsible use.</p><p>Specific examples:</p><ul><li>The European Union reached agreement on the AI Act in 2023, which classifies AI applications into risk categories and imposes strict requirements on the “high-risk” category.</li><li>China introduced its “Interim Measures for the Management of Generative AI Services” in 2023, which impose strict content filtering and compliance requirements.</li><li>In October 2023, US President Biden issued an Executive Order on AI that set security standards for developers of larger models.</li><li>OpenAI temporarily restricted certain features of its ChatGPT service during political campaign periods to prevent the spread of disinformation.</li></ul><ol start="3"><li>Political and economic interests</li></ol><p>AI may increasingly fall into the hands of wealthier companies, countries, and power structures, which may restrict access to serve their own interests. The concentration of AI power, according to some, threatens equal opportunity and may leave large segments of society vulnerable.</p><p>Specific examples:</p><ul><li>Development of the most advanced AI models is currently concentrated in the hands of just a few large tech companies (OpenAI/Microsoft, Google, Anthropic, Meta), which control who can access their technologies and how.</li><li>Russia restricted the availability of Western AI services in 2022, while supporting domestically developed alternatives (e.g.
Yandex Alice).</li><li>The United States restricted the export of advanced chips and AI technologies to China in 2023, citing geopolitical interests.</li><li>Meta (formerly Facebook) restricted the availability of its Llama 2 model in certain countries, including China, Iran, and North Korea, in 2023.</li></ul><p>Risks of restrictions</p><ol><li>Access disparities</li></ol><p>If access to AI becomes restricted, only companies or individuals with the right resources and political influence will be able to benefit from it. This can further widen social and economic disparities and narrow access to knowledge and innovation.</p><p>Specific examples:</p><ul><li>According to Stanford University’s “AI Index,” there are already significant disparities in AI research capacity: low-income countries account for less than 1% of global AI publications.</li><li>The pricing of advanced AI tools often excludes smaller businesses: for example, using the GPT-4 API in 2023 was 10-30 times more expensive than using previous-generation models.</li><li>Some language communities and countries already have limited access to AI systems: for example, OpenAI’s services were not available in more than 20 countries, including Russia, Iran, and North Korea, until 2024.</li></ul><ol start="2"><li>Political manipulation</li></ol><p>Technological advances in artificial intelligence can be used to strengthen political power and to manipulate or even control public opinion.
If AI is controlled exclusively by a few powerful groups, there is a risk that the public will lose the ability to see clearly what is happening in the world.</p><p>Specific examples:</p><ul><li>The Cambridge Analytica scandal showed how advanced data analysis and targeted messaging can be used for political manipulation, and AI systems offer even more sophisticated tools for this.</li><li>China has introduced widespread facial recognition systems and a social credit system that uses AI to strengthen social control.</li><li>As deepfake technology develops, it is increasingly difficult to distinguish real content from fakes, which threatens democratic processes – in 2023, a fake audio recording of a candidate circulated before the Slovak elections.</li><li>In Azerbaijan, AI-based surveillance technologies have been documented in use to identify and track political activists.</li></ul><ol start="3"><li>Development slowdown</li></ol><p>Limited access can also cause problems in research and development.
If only a narrow group can use and develop AI, the pace of innovation can slow, and the technology will never reach the potential that wider, democratic application could unlock.</p><p>Specific examples:</p><ul><li>When OpenAI temporarily restricted access to the DALL-E image generation system in 2022, this slowed progress in related creative fields, as researchers and developers were unable to experiment with the technology.</li><li>Open-source AI projects such as BLOOM and Stable Diffusion have shown that wider access leads to faster innovation – they engaged thousands of community developers in a short time.</li><li>Research communities in developing countries, such as Deep Learning Indaba in Africa, often face barriers to accessing the most advanced AI tools, which limits innovation aimed at solving local problems.</li><li>In healthcare AI, strict data protection regulations, while necessary, have significantly slowed the development and deployment of medical diagnostic AI systems.</li></ul><p>What awaits us if the wealthier classes control AI?</p><p>If AI is available only to the wealthiest companies and individuals, the consequences could be far-reaching. Unequal access to the benefits of innovation, exploitation of business advantages, and concentration of power could all exclude more people and communities from technological progress.
This could lead to heightened tensions, widening social inequalities, and ultimately an imbalance in economic and political power.</p><p>Specific examples:</p><ul><li>AI-based automation is already spreading primarily in industries where companies can afford the investment – according to a McKinsey report, this is contributing to a growing wage gap between skilled and unskilled workers.</li><li>In stock market trading, large financial institutions already use AI-based algorithmic trading systems that can react in fractions of a second, giving them an advantage that smaller investors lack.</li><li>In healthcare, AI-based precision medicine is increasingly available only to wealthier countries and higher-income populations, further widening the health gap.</li><li>In agriculture, AI tools for precision farming are mainly available to large landowners, increasing the competitive disadvantage of small farmers.</li></ul><p><strong>What can we do to ensure the appropriate and democratic use of AI?</strong></p><ol><li>Regulation and ethical guidelines</li></ol><p>It is important to regulate the use of AI globally.
Ethical guidelines and international cooperation are key to ensuring that the development and application of AI benefits everyone, not just a select few.</p><p>Specific examples:</p><ul><li>UNESCO adopted the “Recommendation on the Ethics of Artificial Intelligence” in 2021, which provides a global framework for the ethical development and application of AI.</li><li>The Partnership on AI brings together over 100 organizations (including tech giants, NGOs, and academic institutions) to work on the responsible development of AI.</li><li>Finland’s free online course “Elements of AI” has already provided basic AI knowledge to over 750,000 people in 170 countries, democratizing access to knowledge.</li><li>The EU AI Act defines specific risk categories and sets different regulatory requirements for each – a model that can serve as an international example.</li></ul><ol start="2"><li>Transparency and education</li></ol><p>Education and awareness-raising about artificial intelligence are essential.
If people understand how AI works and how it affects their daily lives and decisions, they are better equipped to participate meaningfully in shaping the technology.</p><p>Specific examples:</p><ul><li>The Mozilla Foundation’s “Trustworthy AI” initiative explains the workings and risks of AI to ordinary people in an accessible way.</li><li>Singapore’s National AI Strategy includes a comprehensive education program that extends from primary school to adult education.</li><li>The AlgoTransparency project publicly examines and documents the workings and impacts of algorithms on big tech platforms.</li><li>The “Machine Learning for Kids” project teaches children the principles of AI in a simple, playful way, preparing the next generation to use the technology responsibly.</li></ul><ol start="3"><li>Developing social and community applications</li></ol><p>Rather than leaving AI to be used exclusively by large corporations and the wealthy, it is important to prioritize social and community applications as well.
Active participation in the development of AI can help ensure that all segments of society have access to the benefits of the technology.</p><p>Specific examples:</p><ul><li>Rainforest Connection uses AI to detect the sounds of illegal logging in rainforests, demonstrating how the technology can serve environmental protection.</li><li>The AI-based features of the app “Be My Eyes” help visually impaired people with everyday activities, setting an example of inclusive technology development.</li><li>The Common Voice project, supported by Mozilla, is building an open-source voice database in different languages, democratizing the development of speech recognition AI for less widely spoken languages.</li><li>Citizen-science projects such as Foldit, where players help solve protein structures, show how ordinary people can be involved in AI research.</li></ul><p>Summary<br>Limiting AI seems inevitable for many reasons, be it resource constraints, security concerns, or economic interests. However, these limitations carry significant risks, especially if they lead to unequal access, political manipulation, or a slowdown in innovation.</p><p>Our most important task is to find a balance that ensures both the safe development of the technology and democratic access.
This is only possible if we base regulation on thoughtful ethical guidelines, invest significant resources in education and awareness-raising, and support AI applications that aim to increase social well-being.</p><p>The issue of limiting AI is not simply a technological one, but a fundamentally socio-political challenge that forces us to rethink what kind of future we want to build and how we can ensure that the benefits of technological advancement are available to everyone.</p><p><a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://aihumancoexist.wordpress.com/tag/abuse/" target="_blank">#abuse</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://aihumancoexist.wordpress.com/tag/artificial-intelligence/" target="_blank">#ArtificialIntelligence</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://aihumancoexist.wordpress.com/tag/artificial-intelligence-restrictions/" target="_blank">#artificialIntelligenceRestrictions</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://aihumancoexist.wordpress.com/tag/democracy/" target="_blank">#democracy</a> <a rel="nofollow noopener noreferrer" class="hashtag u-tag u-category" href="https://aihumancoexist.wordpress.com/tag/political-will/" target="_blank">#politicalWill</a></p>
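The household-equivalence figure quoted in the essay's energy section is easy to sanity-check. A minimal sketch, assuming an average US household uses roughly 10,700 kWh of electricity per year (an assumed ballpark figure, not taken from the article):

```python
# Sanity check for the figure cited above: ~25 GWh of training energy
# vs. the annual electricity use of 2,000-3,000 US households.
# The 10,700 kWh/year household average is an assumed ballpark.
training_energy_kwh = 25_000_000      # 25 GWh expressed in kWh
household_kwh_per_year = 10_700       # assumed average US household usage

households = training_energy_kwh / household_kwh_per_year
print(f"~{households:,.0f} households' annual consumption")  # ~2,336
```

The result falls inside the 2,000-3,000 range the essay quotes, so the claim is internally consistent under this assumption.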
Andrea Learned<p><span class="h-card" translate="no"><a href="https://climatejustice.social/@WBOrcutt" class="u-url mention">@<span>WBOrcutt</span></a></span> <span class="h-card" translate="no"><a href="https://med-mastodon.com/@hannu_ikonen" class="u-url mention">@<span>hannu_ikonen</span></a></span> <span class="h-card" translate="no"><a href="https://masto.ai/@rbreich" class="u-url mention">@<span>rbreich</span></a></span> | Another great way to find some energy and hope in exciting local leader <a href="https://urbanists.social/tags/politicalwill" class="mention hashtag" rel="tag">#<span>politicalwill</span></a> around climate action is to listen to the amazing folks I&#39;ve interviewed for my Living Change climate leadership podcast (I talk with lawmakers from Culver City and Baltimore, and more, and corporate and cultural influencers). Each interview gave me HOPE and energy to continue the push. <a href="https://link.cohostpodcasting.com/2cb109d3-1bf3-4c8a-abcd-c2d0a6c17db9?d=sfaGM4PgE" target="_blank" rel="nofollow noopener noreferrer" translate="no"><span class="invisible">https://</span><span class="ellipsis">link.cohostpodcasting.com/2cb1</span><span class="invisible">09d3-1bf3-4c8a-abcd-c2d0a6c17db9?d=sfaGM4PgE</span></a></p>
Andrea Learned<p><span class="h-card" translate="no"><a href="https://mastodon.social/@dmoser" class="u-url mention">@<span>dmoser</span></a></span> I interviewed 5 local political leaders with <a href="https://urbanists.social/tags/politicalwill" class="mention hashtag" rel="tag">#<span>politicalwill</span></a> to be seen <a href="https://urbanists.social/tags/LivingChange" class="mention hashtag" rel="tag">#<span>LivingChange</span></a> and having a real climate influence for my podcast. Most recent episode - Baltimore/MD Delegate Robbyn Lewis tells a very energizing social justice story. She is <a href="https://urbanists.social/tags/carfree" class="mention hashtag" rel="tag">#<span>carfree</span></a> herself. We need to amplify the stories of those leaders who ARE living these values in their own lives to change <a href="https://urbanists.social/tags/leadership" class="mention hashtag" rel="tag">#<span>leadership</span></a> social norms: <a href="https://link.cohostpodcasting.com/2cb109d3-1bf3-4c8a-abcd-c2d0a6c17db9?d=sfaGM4PgE" target="_blank" rel="nofollow noopener noreferrer" translate="no"><span class="invisible">https://</span><span class="ellipsis">link.cohostpodcasting.com/2cb1</span><span class="invisible">09d3-1bf3-4c8a-abcd-c2d0a6c17db9?d=sfaGM4PgE</span></a></p>
Andrea Learned<p>Wisdom from Gaia Vince in The Guardian: </p><p>&quot;We need honesty from our leaders about what our choices are and what the trade-offs will be for each of us. There are no easy options now, but there are still plenty of choices for us to discuss, debate and democratically decide on.&quot;</p><p><a href="https://www.theguardian.com/commentisfree/2023/apr/11/climate-breakdown-climate-crisis-solutions-idea" target="_blank" rel="nofollow noopener noreferrer" translate="no"><span class="invisible">https://www.</span><span class="ellipsis">theguardian.com/commentisfree/</span><span class="invisible">2023/apr/11/climate-breakdown-climate-crisis-solutions-idea</span></a> <a href="https://urbanists.social/tags/climate" class="mention hashtag" rel="tag">#<span>climate</span></a> <a href="https://urbanists.social/tags/politicalwill" class="mention hashtag" rel="tag">#<span>politicalwill</span></a> <a href="https://urbanists.social/tags/leadership" class="mention hashtag" rel="tag">#<span>leadership</span></a></p>
Andrea Learned<p>Just dropped a new <a href="https://urbanists.social/tags/LivingChange" class="mention hashtag" rel="tag">#<span>LivingChange</span></a> <a href="https://urbanists.social/tags/podcast" class="mention hashtag" rel="tag">#<span>podcast</span></a> episode - in conversation with the amazing Delegate Robbyn Lewis of <a href="https://urbanists.social/tags/Baltimore" class="mention hashtag" rel="tag">#<span>Baltimore</span></a> <a href="https://urbanists.social/tags/MD" class="mention hashtag" rel="tag">#<span>MD</span></a>, talking <a href="https://urbanists.social/tags/transit" class="mention hashtag" rel="tag">#<span>transit</span></a> as a lynchpin of <a href="https://urbanists.social/tags/democracy" class="mention hashtag" rel="tag">#<span>democracy</span></a> and <a href="https://urbanists.social/tags/socialjustice" class="mention hashtag" rel="tag">#<span>socialjustice</span></a>. Do not miss this one, friends. She&#39;s a leader to learn from. <a href="https://urbanists.social/tags/cities" class="mention hashtag" rel="tag">#<span>cities</span></a> <a href="https://urbanists.social/tags/politicalwill" class="mention hashtag" rel="tag">#<span>politicalwill</span></a></p>
Andrea Learned<p>Excited to share my LivingChangePodcast.com is soon to launch. Here&#39;s the trailer video in a LinkedIn post: <a href="https://www.linkedin.com/posts/andrealearned_livingchange-podcast-culvercity-activity-7022358031791263744-kc08?utm_source=share&amp;utm_medium=member_desktop" target="_blank" rel="nofollow noopener noreferrer" translate="no"><span class="invisible">https://www.</span><span class="ellipsis">linkedin.com/posts/andrealearn</span><span class="invisible">ed_livingchange-podcast-culvercity-activity-7022358031791263744-kc08?utm_source=share&amp;utm_medium=member_desktop</span></a> <a href="https://urbanists.social/tags/LivingChange" class="mention hashtag" rel="tag">#<span>LivingChange</span></a> <a href="https://urbanists.social/tags/Climate" class="mention hashtag" rel="tag">#<span>Climate</span></a> <a href="https://urbanists.social/tags/Leadership" class="mention hashtag" rel="tag">#<span>Leadership</span></a> <a href="https://urbanists.social/tags/eBikes" class="mention hashtag" rel="tag">#<span>eBikes</span></a> <a href="https://urbanists.social/tags/Bikes4Climate" class="mention hashtag" rel="tag">#<span>Bikes4Climate</span></a> <a href="https://urbanists.social/tags/PoliticalWill" class="mention hashtag" rel="tag">#<span>PoliticalWill</span></a> <a href="https://urbanists.social/tags/KEXP" class="mention hashtag" rel="tag">#<span>KEXP</span></a></p>