Insecure security

https://www.yubico.com/support/security-advisories/ysa-2024-03/

>A vulnerability was discovered in Infineon’s cryptographic library, which is utilized in YubiKey 5 Series, and Security Key Series with firmware prior to 5.7.0 and YubiHSM 2 with firmware prior to 2.4.0. The severity of the issue in Yubico devices is moderate.

>An attacker could exploit this issue as part of a sophisticated and targeted attack to recover affected private keys. The attacker would need physical possession of the YubiKey, Security Key, or YubiHSM, knowledge of the accounts they want to target, and specialized equipment to perform the necessary attack. Depending on the use case, the attacker may also require additional knowledge including username, PIN, account password, or authentication key.
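The affected/fixed boundary comes down to a firmware version comparison. A minimal sketch (the product labels are invented here, and obtaining the version string, e.g. from `ykman info` output, is left to the reader):

```python
# A minimal sketch, not official Yubico tooling: compare a reported firmware
# version against the fixed releases named in YSA-2024-03 (5.7.0 for the
# YubiKey 5 / Security Key Series, 2.4.0 for the YubiHSM 2).

FIXED_FIRMWARE = {
    "yubikey5": (5, 7, 0),   # also covers the Security Key Series
    "yubihsm2": (2, 4, 0),
}

def parse_version(version: str) -> tuple:
    """Turn '5.4.3' into (5, 4, 3) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def is_affected(product: str, version: str) -> bool:
    """True if the firmware predates the fixed release for this product."""
    return parse_version(version) < FIXED_FIRMWARE[product]

print(is_affected("yubikey5", "5.4.3"))  # True: below 5.7.0
print(is_affected("yubihsm2", "2.4.0"))  # False: already on fixed firmware
```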

https://www.washingtonpost.com/technology/2024/08/27/chinese-government-hackers-penetrate-us-internet-providers-spy/

https://ghostarchive.org/archive/JS9X1

# Chinese government hackers penetrate U.S. internet providers to spy

Beijing’s hacking effort has “dramatically stepped up from where it used to be,” says former top U.S. cybersecurity official.

blog.cryptographyengineering.com

A reminder

## Highlights

>Many systems use encryption of one sort or another. However, when we talk about encryption in the context of modern private messaging services, it typically has a very specific meaning: the use of default [end-to-end](https://en.wikipedia.org/wiki/End-to-end_encryption) encryption to protect message content. When used in an industry-standard way, this feature ensures that all conversations are encrypted by default — under encryption keys that are only known to the communication participants, and not to the service provider.

>Telegram clearly fails to meet this stronger definition, because it does not encrypt conversations by default. If you want to use end-to-end encryption in Telegram, you must manually activate an [optional end-to-end encryption feature](https://core.telegram.org/blackberry/secretchats) called “Secret Chats” for each private conversation you want to have. To reiterate, this feature is explicitly not turned on for the vast majority of conversations, and is only available for one-on-one conversations, and never for group chats with more than two people in them.

>Even though end-to-end encryption is one of the best tools we’ve developed to prevent data compromise, it is hardly the end of the story. One of the biggest privacy problems in messaging is the availability of loads of meta-data — essentially data about who uses the service, who they talk to, and when they do that talking.
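To make "keys that are only known to the communication participants" concrete, here is a minimal sketch of the generic key agreement that end-to-end encryption rests on, using X25519 from the `cryptography` package. This illustrates the property, not Telegram's actual MTProto protocol:

```python
# Illustration only: two participants derive the same key from an X25519
# exchange; the service provider relays public keys but never holds the
# shared secret. This is NOT Telegram's MTProto, just the generic primitive.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Only these public halves ever pass through the server.
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

def derive_chat_key(shared_secret: bytes) -> bytes:
    """Stretch the raw ECDH output into a 32-byte symmetric key."""
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"chat-key",
    ).derive(shared_secret)

alice_key = derive_chat_key(alice_private.exchange(bob_public))
bob_key = derive_chat_key(bob_private.exchange(alice_public))
assert alice_key == bob_key  # same key at both ends, never at the server
```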

thequantuminsider.com

>Federal agencies must start migrating to post-quantum cryptography (PQC) now due to the “record-now, decrypt-later” threat, which anticipates quantum computers decrypting captured data in the future.
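The standard migration pattern against record-now, decrypt-later is hybrid key establishment: derive the session key from both a classical and a post-quantum shared secret, so captured traffic stays confidential unless both schemes are broken. A conceptual sketch, in which `pqc_secret` is a hypothetical placeholder for your ML-KEM (Kyber) library's encapsulation output:

```python
# Conceptual sketch of hybrid key derivation. `pqc_secret` is a placeholder;
# substitute the shared secret from an actual ML-KEM implementation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def hybrid_session_key(ecdh_secret: bytes, pqc_secret: bytes) -> bytes:
    """Bind both secrets into one 32-byte key; breaking one is not enough."""
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-kex",
    ).derive(ecdh_secret + pqc_secret)
```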

seclists.org

Sometimes obvious things are obvious only in hindsight.

www.wired.com

Got some time to read the article. I am sure that India is no exception in leaking sensitive data and being in deep shit over how it is stored. It seems we should assume that data leaks cannot be prevented. So the question is: how do we deal with the aftermath?

# A Leak of Biometric Police Data Is a Sign of Things to Come

## Highlights

>Thousands of law enforcement officials and people applying to be police officers in India have had their personal information leaked online—including fingerprints, facial scan images, signatures, and details of tattoos and scars on their bodies.

>While the misconfigured server has now been closed off, the incident highlights the risks of companies collecting and storing biometric data, such as fingerprints and facial images, and how they could be misused if the data is accidentally leaked.

>“A lot of data is collected in India, but nobody's really bothered about how to store it properly,” Narayan says. Data breaches are happening so regularly that people have “lost that surprise shock factor,”

>“So many other countries are looking at biometric verification for identities, and all of that information has to be stored somewhere,” Fowler says. “If you farm it out to a third-party company, or a private company, you lose control of that data. When a data breach happens, you’re in deep shit, for lack of a better term.”

news.risky.biz

# When Regulation Encourages ISPs to Hack Their Customers

## Highlights

>KT, formerly Korea Telecom, has been [accused of deliberately infecting](https://news.risky.biz/r/19c7fe63?m=54e35b2b-566a-4a9e-906c-d30274fa8b35) 600,000 of its own customers with malware to reduce peer-to-peer file sharing traffic. This is a bizarre hack and a great case study of how government regulation has distorted the South Korean internet.

>South Korean media outlet [JTBC reported](https://news.risky.biz/r/71df521b?m=54e35b2b-566a-4a9e-906c-d30274fa8b35) last month that KT had infected customers who were using Korean cloud data storage services known as 'webhards' (web hard drives). The malware disabled the webhard software, resulted in files disappearing and sometimes caused computers to crash.

>[JTBC news says](https://news.risky.biz/r/e9366696?m=54e35b2b-566a-4a9e-906c-d30274fa8b35) the team involved "consisted of a 'malware development' section, a 'distribution and operation' section, and a 'wiretapping' section that looked at data sent and received by KT users in real time".

>The company claims that the people involved in the webhard hack were a small group operating independently. It's just an amazing coincidence that they just happened to invest so much time and effort into a caper that aligned so well with KT's financial interests!

>South Korea has a 'sender pays' model in which [ISPs must pay](https://news.risky.biz/r/0710710f?m=54e35b2b-566a-4a9e-906c-d30274fa8b35) for traffic they send to other ISPs, breaking the worldwide norm of ['settlement-free peering'](https://news.risky.biz/r/538e2ade?m=54e35b2b-566a-4a9e-906c-d30274fa8b35), voluntary arrangements whereby ISPs exchange traffic without cost.

>Once the sender pays rules were enforced, however, KT was left with large bills from its peer ISPs for the Facebook traffic sent from the cache in its network. KT tried to recoup costs from Facebook, but negotiations broke down and Facebook disabled the cache. South Korean users were instead routed over relatively expensive links to overseas caches with increased latency.

>These sender pays rules may also encourage peer-to-peer file sharing relative to more centralised pirate content operations.

>An unnamed sales manager from a webhard company [told](https://news.risky.biz/r/bf3b5d9a?m=54e35b2b-566a-4a9e-906c-d30274fa8b35) *TorrentFreak* torrent transfers saved them significant bandwidth costs, but as long as traffic flows between ISPs, someone will pay. KT is South Korea's [largest broadband provider](https://news.risky.biz/r/32db743b?m=54e35b2b-566a-4a9e-906c-d30274fa8b35), so since it has more customers, peer-to-peer file sharing means that the company has to pay fees to its competitor ISPs.

>Either way, this is just a great example of where unusual regulation can produce unusual results.

fun
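A toy calculation (with invented numbers, not real Korean settlement rates) shows why sender-pays turns P2P seeding into a bill for the biggest ISP:

```python
# Under sender-pays, the ISP whose customers SEND more inter-ISP traffic owes
# the difference, so seeding by the largest ISP's user base becomes a fee paid
# to competitors. Under settlement-free peering the bill is simply zero.
RATE_USD_PER_GB = 0.02  # hypothetical settlement rate

def settlement_owed(sent_gb: float, received_gb: float) -> float:
    """Net amount this ISP owes its peer under sender-pays rules."""
    return max(sent_gb - received_gb, 0) * RATE_USD_PER_GB

# A KT-like ISP with many seeding customers sends far more than it receives.
print(settlement_owed(sent_gb=900_000, received_gb=400_000))  # 10000.0 USD
```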

https://pluralistic.net/2024/06/28/dealer-management-software/

# Pluralistic: The reason you can't buy a car is the same reason that your health insurer let hackers dox you (28 Jun 2024)

## Metadata

- Author: Cory Doctorow
- Category: rss
- URL: https://pluralistic.net/2024/06/28/dealer-management-software/

## Highlights

>Equifax knew the breach was coming. It wasn't just that their top execs liquidated their stock in Equifax before the announcement of the breach – it was also that they ignored *years* of increasingly urgent warnings from IT staff about the problems with their server security.

>Just like with Equifax, the 737 Max disasters tipped Boeing into a string of increasingly grim catastrophes.

>Equifax isn't just a company: it's *infrastructure*.

>This witch-hunts-as-a-service morphed into an official part of the economy, the backbone of the credit industry, with a license to secretly destroy your life with haphazardly assembled "facts" about your life that you had the most minimal, grudging right to appeal (or even see).

>There's a direct line from that acquisition spree to the Equifax breach(es). First of all, companies like Equifax were early adopters of technology. They're a database company, so they were the crash-test dummies for every generation of database.

>There's a reason libraries, cities, insurance companies, and other giant institutions keep getting breached: they started accumulating tech debt before anyone else, so they've got more asbestos in the walls, more sagging joists, more foundation cracks and more termites.

>The *reason* to merge with your competitors is to create a monopoly position, and the value of a monopoly position is that it makes a company too big to fail, which makes it too big to jail, which makes it *too big to care*.

>The biggest difference was that Boeing once had a useful, high-quality product, whereas Equifax *started off* as an irredeemably terrible, if efficient, discrimination machine, and *grew* to become an equally terrible, but also ferociously incompetent, enterprise.

>Every corporate behemoth is locked in a race between the eventual discovery of its irreparable structural defects and its ability to become so enmeshed in our lives that *we* have to assume the costs of fixing those defects. It's a contest between "too rotten to stand" and "too big to care."

>Remember *how* we discovered this? Change was hacked, went down, ransomed, and no one could fill a scrip in America for more than a week, until they paid the hackers $22m in Bitcoin?

>Well, first Unitedhealthcare became the largest health insurer in America by buying all its competitors in a series of mergers that comatose antitrust regulators failed to block. Then it combined all those other companies' IT systems into a cosmic-scale dog's breakfast that barely ran. Then it bought Change and used its monopoly power to ensure that every Rx ran through Change's servers, which were part of that asbestos-filled, termite-infested, crack-foundationed, sag-joisted teardown. Then, it got hacked.

>Good luck with that. There's a company you've never heard of. It's called CDK Global. They provide "dealer management software." They are a monopolist. They got that way after being bought by a private equity fund called Brookfield. You can't complete a car purchase without their systems, and their systems have been hacked.

>What happens next is a near-certainty: CDK will pay a multimillion dollar ransom, and the hackers will reward them by breaching the personal details of everyone who's ever bought a car, and the slaves in Cambodian pig-butchering compounds will get a fresh supply of kompromat.

>But on the plus side, the need to pay these huge ransoms is key to ensuring liquidity in the cryptocurrency markets, because ransoms are now the only nondiscretionary liability that can only be settled in crypto ;)

cyberscoop.com

Went well with [this](https://www.youtube.com/watch?v=PYbV-AfdnGs).

# How AI Will Change Democracy

## Metadata

- Author: Bruce Schneier
- Category: rss
- URL: https://www.schneier.com/blog/archives/2024/05/how-ai-will-change-democracy.html

## Highlights

>Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

>In particular, there are potential changes over four dimensions: Speed, scale, scope and sophistication.

>It gets interesting when changes in degree can become changes in kind. High-speed trading is fundamentally different than regular human trading. AIs have invented fundamentally new strategies in the game of Go. Millions of AI-controlled social media accounts could fundamentally change the nature of propaganda.

>We don’t know how far AI will go in replicating or replacing human cognitive functions. Or how soon that will happen. In constrained environments it can be easy. AIs already play chess and Go better than humans.

>keep in mind a few questions: Will the change distribute or consolidate power? Will it make people more or less personally involved in democracy? What needs to happen before people will trust AI in this context? What could go wrong if a bad actor subverted the AI in this context?

>The changes are largely in scale. AIs can engage with voters, conduct polls and fundraise at a scale that humans cannot—for all sizes of elections. They can also assist in lobbying strategies. AIs could also potentially develop more sophisticated campaign and political strategies than humans can.

>But as AI starts to look and feel more human, our human politicians will start to look and feel more like AI. I think we will be OK with it, because it’s a path we’ve been walking down for a long time. Any major politician today is just the public face of a complex socio-technical system. When the president makes a speech, we all know that they didn’t write it.

>In the future, we’ll accept that almost all communications from our leaders will be written by AI. We’ll accept that they use AI tools for making political and policy decisions. And for planning their campaigns. And for everything else they do.

>AIs can also write laws. In November 2023, Porto Alegre, Brazil became the first city to enact a law that was [entirely written](https://apnews.com/article/brazil-artificial-intelligence-porto-alegre-5afd1240afe7b6ac202bb0bbc45e08d4) by AI. It had to do with water meters. One of the councilmen prompted ChatGPT, and it produced a complete bill. He submitted it to the legislature without telling anyone who wrote it. And the humans passed it without any changes.

>A law is just a piece of generated text that a government agrees to adopt.

>AI will be good at finding legal loopholes—or at creating legal loopholes. I wrote about this in my latest book, [*A Hacker’s Mind*](https://cyberscoop.com/bruce-schneier-gets-inside-the-hackers-mind/). Finding loopholes is similar to finding vulnerabilities in software.

>AIs will be good at inserting micro-legislation into larger bills.

>AI can help figure out unintended consequences of a policy change—by simulating how the change interacts with all the other laws and with human behavior.

>AI can also write more complex law than humans can.

>AI can write laws that are impossible for humans to understand.

>Imagine that we train an AI on lots of street camera footage to recognize reckless driving and that it gets better than humans at identifying the sort of behavior that tends to result in accidents. And because it has real-time access to cameras everywhere, it can spot it … everywhere.

>The AI won’t be able to explain its criteria: It would be a black-box neural net. But we could pass a law defining reckless driving by what that AI says. It would be a law that no human could ever understand. This could happen in all sorts of areas where judgment is part of defining what is illegal. We could delegate many things to the AI because of speed and scale. Market manipulation. Medical malpractice. False advertising. I don’t know if humans will accept this.

>It could audit contracts. It could operate at scale, auditing *all* human-negotiated government contracts.

>Imagine we are using an AI to aid in some international trade negotiation and it suggests a complex strategy that is beyond human understanding. Will we blindly follow the AI? Will we be more willing to do so once we have some history with its accuracy?

>Could AI come up with better institutional designs than we have today? And would we implement them?

>An AI public defender is going to be a lot better than an overworked not very good human public defender. But if we assume that human-plus-AI beats AI-only, then the rich get the combination, and the poor are stuck with just the AI.

>AI will also change the meaning of a lawsuit. Right now, suing someone acts as a strong social signal because of the cost. If the cost drops to free, that signal will be lost. And orders of magnitude more lawsuits will be filed, which will overwhelm the court system.

>Another effect could be gutting the profession. Lawyering is based on apprenticeship. But if most of the apprentice slots are filled by AIs, where do newly minted attorneys go to get training? And then where do the top human lawyers come from? This might not happen. AI-assisted lawyers might result in more human lawyering. We don’t know yet.

>AI can help enforce the law. In a sense, this is nothing new. Automated systems already act as law enforcement—think speed trap cameras and Breathalyzers. But AI can take this kind of thing much further, like automatically identifying people who cheat on tax returns, identifying fraud on government service applications and watching all of the traffic cameras and issuing citations.

>But most importantly, AI changes our relationship with the law. Everyone commits driving violations all the time. If we had a system of automatic enforcement, the way we all drive would change—significantly. Not everyone wants this future. Lots of people don’t want to fund the IRS, even though catching tax cheats is incredibly profitable for the government. And there are legitimate concerns as to whether this would be applied equitably.

>AI can help enforce regulations. We have no shortage of rules and regulations. What we have is a shortage of time, resources and willpower to enforce them, which means that lots of companies know that they can ignore regulations with impunity.

>Imagine putting cameras in every slaughterhouse in the country looking for animal welfare violations or fielding an AI in every warehouse camera looking for labor violations. That could create an enormous shift in the balance of power between government and corporations—which means that it will be strongly resisted by corporate power.

>The AI could provide the court with a reconstruction of the accident along with an assignment of fault. AI could do this in a lot of cases where there aren’t enough human experts to analyze the data—and would do it better, because it would have more experience.

>Automated adjudication has the potential to offer everyone immediate justice. Maybe the AI does the first level of adjudication and humans handle appeals. Probably the first place we’ll see this is in contracts. Instead of the parties agreeing to binding arbitration to resolve disputes, they’ll agree to binding arbitration by AI. This would significantly decrease cost of arbitration. Which would probably significantly increase the number of disputes.

>If you and I are business partners, and we have a disagreement, we can get a ruling in minutes. And we can do it as many times as we want—multiple times a day, even. Will we lose the ability to disagree and then resolve our disagreements on our own? Or will this make it easier for us to be in a partnership and trust each other?

>Human moderators are still better, but we don’t have enough human moderators. And AI will improve over time. AI can moderate at scale, giving the capability to every decision-making group—or chatroom—or local government meeting.

>AI can act as a government watchdog. Right now, much local government effectively happens in secret because there are no local journalists covering public meetings. AI can change that, providing summaries and flagging changes in position.

>This would help people get the services they deserve, especially disadvantaged people who have difficulty navigating these systems. Again, this is a task that we don’t have enough qualified humans to perform. It sounds good, but not everyone wants this. Administrative burdens can be deliberate.

>Finally, AI can eliminate the need for politicians. This one is further out there, but bear with me. Already there is [research](https://www.nytimes.com/2016/08/24/us/politics/facebook-ads-politics.html) showing AI can extrapolate our political preferences. An AI personal assistant trained on and continuously attuned to your political preferences could advise you, including what to support and who to vote for. It could possibly even vote on your behalf or, more interestingly, act as your personal representative.

>We can imagine a personal AI directly participating in policy debates on our behalf along with millions of other personal AIs and coming to a consensus on policy.

>More near term, AIs can result in more ballot initiatives. Instead of five or six, there might be five or six hundred, as long as the AI can reliably advise people on how to vote. It’s hard to know whether this is a good thing. I don’t think we want people to become politically passive because the AI is taking care of it. But it could result in more legislation that the majority actually wants.

>I think this is all coming. The time frame is hazy, but the technology is moving in these directions.

>All of these applications need security of one form or another. Can we provide confidentiality, integrity and availability where it is needed? AIs are just computers. As such, they have all the security problems regular computers have—plus the new security risks stemming from AI and the way it is trained, deployed and used. Like everything else in security, it depends on the details.

>In most cases, the owners of the AIs aren’t the users of the AI. As happened with search engines and social media, surveillance and advertising are likely to become the AI’s business model. And in some cases, what the user of the AI wants is at odds with what society wants.

>We need to understand the rate of AI mistakes versus the rate of human mistakes—and also realize that AI mistakes are viewed differently than human mistakes. There are also different types of mistakes: false positives versus false negatives. But also, AI systems can make different kinds of mistakes than humans do—and that’s important. In every case, the systems need to be able to correct mistakes, especially in the context of democracy.

>Many of the applications are in adversarial environments. If two countries are using AI to assist in trade negotiations, they are both going to try to hack each other’s AIs. This will include attacks against the AI models but also conventional attacks against the computers and networks that are running the AIs. They’re going to want to subvert, eavesdrop on or disrupt the other’s AI.

>Large language models work best when they have access to everything, in order to train. That goes against traditional classification rules about compartmentalization.

>Can we build systems that reduce power imbalances rather than increase them? Think of the privacy versus surveillance debate in the context of AI.

>And similarly, equity matters. Human agency matters.

>Whether or not to trust an AI is less about the AI and more about the application. Some of these AI applications are individual. Some of these applications are societal. Whether something like “fairness” matters depends on this. And there are many competing definitions of fairness that depend on the details of the system and the application. It’s the same with transparency. The need for it depends on the application and the incentives. Democratic applications are likely to require more transparency than corporate ones and probably AI models that are not owned and run by global tech monopolies.

>AI will be one of humanity’s most important inventions. That’s probably true. What we don’t know is if this is the moment we are inventing it. Or if today’s systems are yet more over-hyped technologies. But these are security conversations we are going to need to have eventually.

>AI is coming for democracy. Whether the changes are a net positive or negative depends on us. Let’s help tilt things to the positive.

Yea or Nay?

https://www.flyingpenguin.com/?p=49767

cross-posted from: https://group.lt/post/1977692

>Some appetizers for the book on breaking Enigma.

www.eff.org

Selection of quotes:

>This is despite the fact that it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda.

>The law limits liability to intermediaries—entities that “provide services to distribute, maintain, or update” TikTok by means of a marketplace, or that provide internet hosting services to enable the app’s distribution, maintenance, or updating. The law also makes intermediaries responsible for its implementation.

>The law explicitly denies to the Attorney General the authority to enforce it against an individual user of a foreign adversary controlled application, so users themselves cannot be held liable for continuing to use the application, if they can access it.

>Enacting this legislation has undermined this long standing, democratic principle. It has also undermined the U.S. government’s moral authority to call out other nations for when they shut down internet access or ban social media apps and other online communications tools.

>Our lawmakers should work to protect data privacy, but this was the wrong approach. They should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries.

**Thoughts?**

blog.cryptographyengineering.com

>But there is a saying in our field that attacks only get better.

seclists.org

>ECDSA NIST-P521 keys used with any vulnerable product / component should be considered compromised and consequently revoked by removing them from authorized_keys, GitHub, ...
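Revocation starts with an inventory. A minimal sketch that flags `ecdsa-sha2-nistp521` entries in an authorized_keys file (the path is a placeholder, and whether a flagged key was ever handled by a vulnerable client still has to be established separately):

```python
# Minimal inventory sketch: list authorized_keys entries that use the curve
# named in the advisory (OpenSSH key type string "ecdsa-sha2-nistp521").
from pathlib import Path

def flag_p521_keys(path: str) -> list:
    """Return the authorized_keys lines that carry NIST P-521 ECDSA keys."""
    flagged = []
    for line in Path(path).read_text().splitlines():
        entry = line.strip()
        if entry and not entry.startswith("#") and "ecdsa-sha2-nistp521" in entry:
            flagged.append(entry)
    return flagged

# Hypothetical path; point it at each user's file on your servers.
for entry in flag_p521_keys("/home/alice/.ssh/authorized_keys"):
    print(entry)
```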

https://www.bleepingcomputer.com/news/security/intel-and-lenovo-servers-impacted-by-6-year-old-bmc-flaw/

>Although the vulnerability was addressed in August 2018, the maintainers of Lighttpd patched it silently in version 1.4.51 without assigning a tracking ID (CVE).

>This led the developers of AMI MegaRAC BMC to miss the fix and fail to integrate it into the product. The vulnerability thus trickled down the supply chain to system vendors and their customers.

>BMCs are microcontrollers embedded on server-grade motherboards, including systems used in data centers and cloud environments, that enable remote management, rebooting, monitoring, and firmware updating on the device.

In short: it is BIOS access plus a virtual keyboard and mouse reachable over the internet, and whoever can access it controls the computer. Exposing such devices without adequate protection is an interesting idea in itself, but quite a few dedicated server providers do it for various reasons (less work).
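Checking your own hosts for that kind of exposure is straightforward. A minimal sketch that probes ports commonly used by BMC interfaces; the port list and hostname are assumptions (IPMI RMCP proper is UDP 623 and needs a different probe), and you should only scan machines you are authorized to test:

```python
# Minimal exposure check: try TCP connections to ports commonly used by BMC
# management interfaces (80/443 web UI; 623 is the IPMI port, though RMCP
# proper runs over UDP 623). Adjust the list for your vendor.
import socket

def open_tcp_ports(host: str, ports=(80, 443, 623), timeout=2.0) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass
    return reachable

print(open_tcp_ports("bmc.example.internal"))  # hypothetical BMC hostname
```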

securityaffairs.com

This is quite important, but there is still hope: full exploitation seems to require malware already present on the computer, and if that is the case, there are bigger problems to solve.

www.404media.co

>The little known “manufacturer” or “manager” reset codes could let third parties—such as spies or criminals—bypass locks without the owner’s consent and are sometimes not disclosed to customers.

>The fact the DoD protected its own interests while not warning the public gives a stark demonstration of what could happen if a backdoor was inserted into a consumer electronics device or similar.

>The documentation also explicitly says that sometimes the existence of a manager code may not be sent to an actual user of the device. “In some instances the Manager Code and associated Operating Instructions are not issued to the End User,” it reads, meaning that people may be using these locks without understanding that they can include a backdoor code.

www.theregister.com

>"Khurana was handsomely compensated," Meta continued in its complaint. "But ... that was not enough." Despite that fat pay package and VP title, Khurana may have failed to consider the level of monitoring or logging that goes on inside Meta's networks, if the lawsuit's allegations are correct.
