NOTICE
OF EXEMPT SOLICITATION
NAME OF REGISTRANT: Microsoft Corporation
NAME OF PERSONS RELYING ON EXEMPTION: Arjuna
Capital, Open MIC
ADDRESS OF PERSON RELYING ON EXEMPTION: Arjuna
Capital, 13 Elm St., Manchester, MA 01944; Open MIC, 1012 Torney Avenue, San Francisco, CA 94129-1755.
WRITTEN MATERIALS: The attached written materials are submitted
pursuant to Rule 14a-6(g)(1) (the “Rule”) promulgated under the Securities Exchange Act of 1934,* in connection with a proxy
proposal to be voted on at the Registrant’s 2024 Annual Meeting. *Submission is not required of this filer under the terms of the
Rule but is made voluntarily by the proponent in the interest of public disclosure and consideration of these important issues.
October 30, 2024
Dear Microsoft Corporation Shareholders,
We are writing to urge you to VOTE “FOR” PROPOSAL 8
on the proxy card, which asks Microsoft to report on risks associated with mis- and disinformation generated and disseminated via
Microsoft’s generative Artificial Intelligence (gAI), plans to mitigate these risks, and the effectiveness of such efforts. We believe
shareholders should vote “FOR” the Proposal for the following reasons:
1. Misinformation and disinformation from gAI present serious economic and societal risks; the World Economic Forum ranks it as the top global risk over the next two years.
2. Microsoft’s gAI is susceptible to generating mis- and disinformation.
3. Mis- and disinformation generated via Microsoft’s gAI present legal, regulatory, and reputational risks, including a potential lack of legal protection from Section 230.
4. Microsoft’s current actions and reports do not address the concerns of this Proposal.
Expanded Rationale FOR Proposal 8
The Proposal makes the following request:
RESOLVED: Shareholders request the Board issue a report, at
reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated
annually thereafter, assessing the risks to the Company’s operations and finances as well as risks to public welfare presented by
the company’s role in facilitating misinformation and disinformation disseminated or generated via artificial intelligence, and
what steps, if any, the company plans to remediate those harms, and the effectiveness of such efforts.
We believe shareholders should vote “FOR” the Proposal
for the following reasons:
1. Misinformation and disinformation from gAI present serious economic
and societal risks; the World Economic Forum ranks it as the top global risk over the next two years.
The World Economic Forum has ranked mis- and disinformation as the
top global risk over the next two years, and Eurasia Group has ranked ungoverned AI as the fourth greatest risk in
2024.1 Without the proper technological guardrails in place, these risks will only grow as bad actors become more sophisticated
in manipulation of gAI.
Background: After the initial public release of gAI from ChatGPT, top
AI experts issued strong warnings that gAI was not ready for public use and presented potential for harm. In March 2023, hundreds of experts
signed an open letter urging the world’s leading AI labs to pause the training of powerful AI systems for six months.2
These experts warned that gAI presents “profound risks to society and humanity” if not properly managed. In May 2023, more
than 500 prominent academics and industry leaders, including OpenAI CEO Sam Altman, signed a separate statement by the Center for AI Safety
declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such
as pandemics and nuclear war.”3
The tech industry, including Microsoft, did not heed these warnings, and over the last two years many of these experts’ dire predictions have come to pass. Notably, we have seen the proliferation of mis- and disinformation as gAI drops “the cost of generating believable misinformation by several orders of magnitude.”4
The potential harms of this mis- and disinformation go beyond simple miscalculations or erroneous statements.
Over the last two years, mis- and disinformation disseminated and generated
via gAI have illustrated how gAI can:
· Impact financial markets: The Journal of Economics and Business found that negative misinformation has a significant, negative short-term effect on stock returns.5 This potential market impact is especially concerning as gAI reduces the barriers to producing believable misinformation. We saw this play out in real time in 2023, when an AI-generated image of an explosion at the Pentagon caused a brief dip in the stock market.6
_____________________________
1 https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf;
https://www.eurasiagroup.net/live-post/risk-4-ungoverned-ai#:~:text=In%20a%20year%20when%20four%20billion%20people%20head,and%20sow%20political%20chaos%20on%20an%20unprecedented%20scale.
2 https://futureoflife.org/open-letter/pause-giant-ai-experiments/
3 https://www.safe.ai/work/statement-on-ai-risk
4 http://www.theinformation.com/articles/what-to-do-about-misinformation-in-the-upcoming-election-cycle
5 https://www.sciencedirect.com/science/article/pii/S0148619523000231
6 https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections;
https://www.business-humanrights.org/en/latest-news/us-image-produced-by-generative-ai-spreads-fear-over-fake-pentagon-explosion/
· Influence elections: Within the last year, gAI-manipulated videos, audio, and photos have been distributed in attempts to sway election outcomes. Freedom House found that 16 countries have used gAI to sow doubt, smear opponents, or influence public debate online.7 It also found that authoritarian governments use gAI models to enhance online censorship by deploying machine learning to remove unfavorable political, social, and religious speech.8 A separate study found that gAI chatbots gave incorrect election information 27% of the time, including registration instructions, voting dates, and presidential candidate information.9
· Undermine overall trust in the information ecosystem and public institutions: The proliferation of gAI disinformation has sowed significant mistrust in our information ecosystem, which could ultimately decrease trust in our public institutions. An Adobe study reported that 84% of US respondents are worried that online content is vulnerable to manipulation and are concerned about election integrity.10 And 70% believe it is difficult to verify whether online information is trustworthy.11 The widespread use of gAI could erode trust in accurate election information or in public figures broadly.
Not only are gAI risks a threat to society at large, they are also a threat to Microsoft and its investors. Mis- and disinformation’s ability to manipulate public opinion, exacerbate biases, weaken institutional trust, and sway elections undermines the stability of our democracy and economy. And when companies harm society and the economy, the value of diversified portfolios can fall with GDP.12
While Microsoft may benefit in the short-term by rushing
gAI technologies to market, it does so at the potential expense of its long-term financial health and diversified shareholders’
portfolios. It is in the best interest of shareholders for Microsoft to effectively mitigate mis- and disinformation, in order to protect
the Company’s long-term financial health and ensure its investors do not internalize these costs.
2. Microsoft’s gAI is susceptible to generating mis- and disinformation.
It is clear that Microsoft is making big bets on gAI, as
the Company has invested over $13 billion in this technology.13 Prior to the initial release of its gAI search engine, ethicists
and Microsoft employees raised concerns about the technology’s readiness, including the possibility that it would “flood [social
media] with disinformation, degrade critical thinking and erode the factual foundation of modern society.”14
_____________________________
7 https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence
8 Ibid.
9 https://www.nbcnews.com/tech/tech-news/ai-chatbots-got-questions-2024-election-wrong-27-time-study-finds-rcna155640
10 https://www.brennancenter.org/our-work/analysis-opinion/how-ai-puts-elections-risk-and-needed-safeguards
11 https://www.cnet.com/tech/services-and-software/generative-ai-muddies-the-election-2024-outlook-and-voters-are-worried/
12 See
Universal Ownership: Why Environmental Externalities Matter to Institutional Investors, Appendix IV (demonstrating linear relationship
between GDP and a diversified portfolio) available at https://www.unepfi.org/fileadmin/documents/universal_ownership_full.pdf;
cf. https://www.advisorperspectives.com/dshort/updates/2020/11/05/market-cap-to-gdp-an-updated-look-at-the-buffett-valuation-indicator
(total market capitalization to GDP “is probably the best single measure of where valuations stand
at any given moment”) (quoting Warren Buffett).
13 https://www.cnbc.com/2023/04/08/microsofts-complex-bet-on-openai-brings-potential-and-uncertainty.html?msockid=357d576b6d1461d208e5465b6914679a
14 https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html
Yet, Microsoft has continued to move forward, recently committing
to invest another $30 billion in gAI alongside BlackRock.15 Despite these large investments, Microsoft still struggles to control
mis- and disinformation generated and disseminated via its gAI:
· Copilot repeated misinformation that there would be a 1- to 2-minute delay in June’s CNN election debate broadcast, contributing to conspiracy theories that the candidates’ debate performance could be edited by the network.16
· Microsoft’s Image Creator generated false photos of individuals tampering with US ballots and ballot boxes that could be used to foster election mistrust.17
· One study found that Copilot offered scientifically accurate answers to medical questions only 54% of the time.18
· Microsoft’s MyCity Chatbot provided misleading information to NYC entrepreneurs that suggested they could exploit workers’ tips and discriminate based on income sources.19
· When queried about IQ scores in various countries, Copilot referenced debunked research promoting the idea that white people are genetically superior to nonwhite people.20
As illustrated, Microsoft’s gAI remains a work in
progress. Given the severity of the risks, the Company must do more to control these vulnerabilities.
3. Mis- and disinformation generated via Microsoft’s gAI present legal, regulatory, and reputational risks, including a potential lack of legal protection from Section 230.
Microsoft faces legal, regulatory, and reputational risks
in the short-term that may financially impact the Company.
· Legal Risk: Legal experts are questioning who will ultimately be held responsible for mis- and disinformation generated by gAI. Because the Company’s gAI technology can be the origin of mis- and disinformation, there is no guarantee that Section 230 will continue to protect companies from litigation as a result of misinformation and disinformation on their platforms. This may make Microsoft vulnerable to future legal scrutiny.
_____________________________
15 https://www.techradar.com/pro/microsoft-and-blackrock-team-up-on-30-billion-ai-investment
16 https://www.nbcnews.com/tech/internet/openai-chatgpt-microsoft-copilot-fallse-claim-presidential-debate-rcna159353
17 https://www.bbc.com/news/world-us-canada-68471253?utm_source=syndication
18 https://www.msn.com/en-us/news/technology/42-of-ai-answers-were-considered-to-lead-to-moderate-or-mild-harm-and-22-to-death-or-severe-harm-a-damning-research-paper-suggests-that-bing-microsoft-copilot-ai-medical-advice-may-actually-kill-you/ar-AA1saa97?ocid=BingNewsVerp
19 https://www.analyticsinsight.net/artificial-intelligence/the-most-controversial-ai-decisions-in-recent-years
20 https://arstechnica.com/ai/2024/10/google-microsoft-and-perplexity-promote-scientific-racism-in-ai-search-results/
· Regulatory Risk: Tech companies have faced an evolving regulatory landscape within the last year, with the EU AI Act leading the way in providing the first comprehensive set of rules establishing guardrails for AI.21 The Algorithmic Accountability Act (AAA), introduced in the US Senate, would obligate the Federal Trade Commission to require entities to perform impact assessments of their AI systems.22 At the state level, 19 states have passed laws to address deepfakes and other concerns with gAI.23 Microsoft must consider and prepare for additional, imminent AI reporting requirements. Furthermore, by fulfilling the requests of this Proposal and setting a robust example of best practice, Microsoft has an opportunity to constructively shape AI reporting frameworks.
· Reputational Risk: While Microsoft has invested significantly in its gAI, the technology’s lack of reliability and consistency has sowed significant mistrust amongst users.24 The Associated Press-NORC Center for Public Affairs Research and USAFacts found that two-thirds of U.S. adults are not at all confident in the reliability of generative AI. Only 12% believe that AI search engines, including Copilot, produce results that are always or often based on facts.25 Microsoft must bridge this AI trust gap to ensure its technology is adopted by users. Negative press around mis- and disinformation from Microsoft’s technology has the potential to sow user distrust, slow adoption, and depress stock value. We have already seen how negative press around gAI misinformation can impact a company’s stock performance.26 Microsoft must do what it can to mitigate reputational risks that sow mistrust in the Company’s technology.
4. Microsoft’s current actions and reports do not address the concerns of this Proposal.
In its opposition statement, Microsoft lists several reports and practices on responsible AI to obscure the need to fulfill this Proposal’s request. Yet this Proposal asks Microsoft for a report that goes beyond current and planned reporting. Current reporting, including the EU and Australian Codes of Practice on Disinformation and Misinformation, the Responsible AI Standard and Principles, and the Responsible AI Transparency Report, simply outlines the Company’s commitments to ethical AI standards and frameworks. While the inaugural Responsible AI Transparency Report provides a deeper explanation of Microsoft’s approach to responsible AI, we are seeking reporting that goes beyond this and assesses how Microsoft’s principles and frameworks are actually working. The requested report would provide a comprehensive assessment of the risks associated with gAI, so that the Company can effectively mitigate them, and an evaluation of how effectively the Company tackles the risks identified.
_____________________________
21 https://www.techmonitor.ai/digital-economy/ai-and-automation/what-is-the-eu-ai-act?cf-view
22 https://www.jdsupra.com/legalnews/the-current-and-evolving-landscape-of-4080755/
23 https://www.esginvestor.net/keeping-up-with-ai/
24 https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/10/consumers-are-voicing-concerns-about-ai#foot2
25 https://apnews.com/article/ai-chatbots-misinformation-voting-election-2024-7131822c0f2ebe843b4c7e3cb111a3d3
26 https://www.npr.org/2023/02/09/1155650909/google-chatbot--error-bard-shares
As the risks of gAI are severe and broadly consequential, it is crucial that Microsoft transparently demonstrate to shareholders that it has fully identified the risks and is evaluating its ability to address them effectively. We believe these steps will help mitigate both short- and long-term Company risks and provide an appropriate accountability mechanism.
Conclusion
For the reasons provided above, we strongly urge you to support the
Proposal. We believe a report on misinformation and disinformation risks, mitigation, and assessment related to generative AI will help
ensure Microsoft is acting in the long-term best interest of shareholders.
Please contact Natasha Lamb at natasha@arjuna-capital.com or
Michael Connor at mconnor@openmic.org for additional information.
Sincerely,
Natasha Lamb, Arjuna Capital
Michael Connor, Executive Director,
Open MIC
This is not a solicitation of authority to vote your proxy. Please DO NOT send us your proxy card. Arjuna Capital and Open MIC are not able to vote your proxies, nor does this communication contemplate such an event. The proponents urge shareholders to vote for Proxy Item 8 following the instructions provided in management’s proxy mailing.
The views expressed are those of the authors and Arjuna Capital
and Open MIC as of the date referenced and are subject to change at any time based on market or other conditions. These views are not
intended to be a forecast of future events or a guarantee of future results. These views may not be relied upon as investment advice.
The information provided in this material should not be considered a recommendation to buy or sell any of the securities mentioned. It
should not be assumed that investments in such securities have been or will be profitable. This piece is for informational purposes and
should not be construed as a research report.