Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

His testimony comes after Microsoft admitted that it could have taken steps to prevent two aggressive nation-state cyberattacks from China and Russia.

According to Microsoft whistleblower Andrew Harris, Microsoft spent years ignoring a vulnerability while he proposed fixes to the “security nightmare.” Instead, Microsoft feared it might lose its government contract by warning about the bug and allegedly downplayed the problem, choosing profits over security, ProPublica reported.

This apparent negligence led to one of the largest cyberattacks in US history, and officials’ sensitive data was compromised due to Microsoft’s security failures. The China-linked hackers stole 60,000 US State Department emails, Reuters reported. And several federal agencies were hit, giving attackers access to sensitive government information, including data from the National Nuclear Security Administration and the National Institutes of Health, ProPublica reported. Even Microsoft itself was breached, with a Russian group accessing senior staff emails this year, including their “correspondence with government officials,” Reuters reported.

81 points

You can have a proprietary OS that's secure, but the problem is that once you get to the point where you're selling data and allowing anything to be installed, of course it's no longer secure.

19 points

You can't verify it's secure if it's proprietary, so how can it ever be considered secure? Having control over other people's computing creates bad incentives to gain at your users' expense, so trust should be gone from day one.

11 points

I'd argue the unknown can't be used to call something technically secure or insecure. If you apply that standard, then any OS running on non-open-source hardware is insecure, because the VHDL/Verilog code isn't verifiable.

Unless everyone is running open-source RISC-V cores or an FPGA for their hardware, it's a game of moving goalposts over where someone plants that flag.

1 point

Consider people counting paper votes in an election. Multiple political parties, each motivated by self-interest, watch the counting to keep the others from faking votes. That's a security feature; without it, the validity of the election has a critical unknown, making it highly suspect.

An OS built on proprietary software is like an electronic voting machine: we pretend it's secure to feel better about a flaw we can't change.

1 point

Security comes in degrees. The highest level would indeed require open-source hardware. I hope to build a rig like that someday.

42 points

You can have audits done on proprietary software. Just because the public can’t see it doesn’t mean nobody else can.

3 points

That just moves the required trust from the first party to a second or third party. It's still unreasonable trust.

3 points

That's the crux of it here. Microsoft wanted in on the data game they saw Facebook and Google profiting from. However, Microsoft still charges you for the software it uses to harvest your data.

13 points

Sure, it's secure, but is it verifiably secure?

7 points

I mean, you can provide audit findings and results, and that's a pretty big part of vendor management and due diligence. But at some point you have to either accept the risk of using open source software, which can be susceptible to supply chain attacks, might be poorly maintained, etc., or accept the risk of taking the closed source company's documentation at face value (and that documentation can also be poorly maintained and susceptible to supply chain attacks).

There has to be some level of risk tolerance to do business, and open source doesn't automatically reduce risk. But it can at least reduce enshittification.
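As a concrete illustration of the due-diligence step mentioned above, here is a minimal shell sketch of one common supply-chain check: verifying a downloaded release against a checksum published by the maintainers. The filenames are hypothetical, and the "published" checksum is generated locally so the example is self-contained; real projects publish a `SHA256SUMS` file out-of-band, often GPG-signed.

```shell
# Simulate a downloaded release tarball (hypothetical name).
printf 'release contents' > release-1.0.tar.gz

# The checksum the maintainers would publish; computed locally here
# only so this example runs standalone.
sha256sum release-1.0.tar.gz > SHA256SUMS

# Verification step: sha256sum --check exits non-zero on tampering.
if sha256sum --check SHA256SUMS; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - do not install" >&2
fi
```

This only defends against tampering in transit, not a compromised upstream, which is why it's one part of due diligence rather than the whole of it.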

7 points

It's pretty hilarious when people act like being open source means something is "more secure." It can be, but it's absolutely not guaranteed. The xz debacle comes to mind.

There are tons of bugs in open source software. Linux has had its fair share.
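For context on the xz debacle: the backdoor (CVE-2024-3094) shipped in xz/liblzma releases 5.6.0 and 5.6.1. A rough shell sketch of the kind of version check admins ran at the time; the helper function name is made up for this example, and matching version strings is obviously no substitute for a real audit:

```shell
# Returns success (0) only for the two known-backdoored xz releases.
is_backdoored_xz() {
    case "$1" in
        5.6.0|5.6.1) return 0 ;;  # CVE-2024-3094 affected versions
        *)           return 1 ;;
    esac
}

# Demonstration with hardcoded version strings:
is_backdoored_xz "5.6.1" && echo "5.6.1: known-bad release"
is_backdoored_xz "5.4.6" || echo "5.4.6: not a known-bad release"
```

In practice you'd feed it the output of something like `xz --version`, and a clean version string still tells you nothing about the next backdoor.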
