Cyber trends for 2023

Nothing is more difficult than making predictions, especially in the fast-advancing field of cyber security. Instead of throwing out wild ideas about what might be coming, I have collected here some trends that many people and publications have predicted for 2023.

HTTPS: These days HTTPS has effectively become the default transport for web browsing. Most notably, the Chrome browser now marks any older HTTP website as “Not Secure” in the address bar. Chrome also attempts to “upgrade” to the HTTPS version of a website if you accidentally navigate to the insecure version. If a secure version isn’t available, an on-screen warning asks whether you would like to continue. As HTTPS has become more common across the web, Google Chrome is preparing to launch a security option that will block “insecure” downloads made over HTTP.
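
Chrome’s auto-upgrade behaviour amounts to rewriting the URL scheme before the request is sent. Here is a minimal sketch of that idea in Python; the function name is my own, for illustration only, and the real browser logic (fallback and warning when HTTPS fails) is omitted:

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url: str) -> str:
    """Rewrite an http:// URL to https://, mimicking the browser's auto-upgrade.

    A real browser additionally falls back to HTTP with an on-screen warning
    if the HTTPS version fails to load; that part is not modeled here.
    """
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_to_https("http://example.com/login"))  # https://example.com/login
```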

Malvertising: Cyber criminals will keep impersonating brands through search engine advertisement services to defraud users in 2023. The FBI is warning US consumers that cybercriminals are placing ads in search engine results that impersonate well-known brands, in an attempt to spread ransomware and steal financial information. Cybercriminals are purchasing ads that show up at the very top of search engine results, often purporting to link to a legitimate company’s website. However, anyone clicking on the link is instead taken to a lookalike page that may appear identical but is in fact designed to phish for login credentials and financial details, or even trick the unwary into downloading ransomware. The FBI has advised consumers to use ad blockers to protect themselves from such threats.

Encrypted malware: The vast majority of malware now arrives over encrypted connections, typically HTTPS web sessions. Most cyber-attacks over the past year have used TLS/SSL encryption to hide from security teams, traditional firewalls and many other security tools. According to the WatchGuard Threat Lab report, over 85% of attacks hide in encrypted channels, and the top threat arrived exclusively over encrypted connections. If you are not inspecting encrypted traffic when it enters your network, you will not be able to detect most malware at the network level. Hopefully you at least have endpoint protection implemented for a chance to catch it further down the cyber kill chain.

Software vulnerabilities: Weak encryption configurations and missing security headers will still be very common in 2023. In 2022, nearly every application had at least one vulnerability or misconfiguration affecting security, and a quarter of application tests found a highly or critically severe vulnerability. Read more at Misconfigurations, Vulnerabilities Found in 95% of Applications.
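
Missing security headers are among the easiest of these findings to check for yourself. A small sketch of the idea; the header list below is a common scanner baseline that I am assuming here, not an official standard, and real tools check many more:

```python
# A common baseline of HTTP response headers that scanners look for.
# The exact list varies by tool; this selection is an assumption for illustration.
RECOMMENDED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def missing_security_headers(headers: dict) -> list:
    """Return the recommended headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in headers}
    return [h for h in RECOMMENDED_HEADERS if h.lower() not in present]

# A response that only sets HSTS is flagged for the other four headers:
print(missing_security_headers({"Strict-Transport-Security": "max-age=63072000"}))
```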

Old vulnerabilities: You will see attackers try to use old vulnerabilities again in 2023, because they work. Attackers take the path of least resistance, and as long as vendors don’t consistently perform thorough root-cause analysis when fixing security vulnerabilities, it will continue to be worth investing time in trying to revive known vulnerabilities before looking for novel ones. Many companies do not patch their systems in a reasonable time, or at all, so they stay vulnerable. New variations of old vulnerabilities are also being developed: approximately 50% of the observed 0-days in the first half of 2022 were variants of previously patched vulnerabilities.

Security gaps: There are still big gaps in companies’ cyber security. The rapid advancement of technology in all industries has led to the threat of ever-increasing cyberattacks that target businesses, governments, and individuals alike. Lack of knowledge, poor maintenance of employees’ skills, and indifference are the strongest obstacles to developing many companies’ cyber security. While security screening and limiting who has access to your data are both important aspects of personnel security, they will only get you so far.

Cloud: At a hyperscale cloud provider, there can be several thousand people working around the globe who could potentially access your data. Security screening and access limiting alone still leave a significant risk of malicious or accidental access to data. Instead, you should expect your cloud provider to take a more layered approach.

MFA: MFA fatigue attacks are putting your organization at risk in 2023; multi-factor auth fatigue is real. MFA fatigue is a technique where a cybercriminal attempts to gain access to a corporate network by bombarding a user with MFA prompts until they finally accept one. The attempt can succeed, especially when the target victim is distracted or overwhelmed by the notifications, or mistakes them for legitimate authentication requests. It’s a huge threat because it bypasses one of the most effective security measures.

Passwords: Passwords will not go away completely, even though new solutions to replace them will be pushed to users. When you create passwords or passphrases, make them good and long enough to be secure. Including a comma character in a password can make it harder for cyber criminals to use if it ever leaks out. The reason is that a comma in a password can corrupt tabular comma-separated values (CSV) files, which are a common way to collect and distribute stolen passwords.
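
The comma trick only defeats sloppy tooling: a naive comma-split mangles the stolen row, while a proper CSV parser quotes the field and is unaffected. A quick demonstration (the example credentials are made up):

```python
import csv
import io

password = "correct,horse,battery"  # deliberately contains commas

# Naive comma-splitting, as crude credential dumps often use, breaks the row:
naive = "alice@example.com,{}".format(password).split(",")
print(naive)  # ['alice@example.com', 'correct', 'horse', 'battery'] - password fragmented

# A real CSV writer quotes the field, so well-written tooling round-trips it intact:
buf = io.StringIO()
csv.writer(buf).writerow(["alice@example.com", password])
row = next(csv.reader(io.StringIO(buf.getvalue())))
print(row)  # ['alice@example.com', 'correct,horse,battery']
```

So treat the comma as a minor nuisance for attackers at best; length and uniqueness remain what actually make a password strong.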

EU: The Network and Information Security (NIS) Directive was the first piece of EU-wide legislation on cybersecurity; its successor is the Network and Information Security 2 Directive, also known as NIS2. Rules requiring EU countries to meet stricter supervisory and enforcement measures and harmonise their sanctions were approved by MEPs in late 2022, and they will start to affect security decisions in 2023. The new rules set tighter cybersecurity obligations for risk management, reporting and information sharing. The requirements cover incident response, supply chain security, encryption and vulnerability disclosure, among other provisions. The new rules will also protect so-called “important sectors” such as postal services, waste management, chemicals, food, manufacturing of medical devices, electronics, machinery, motor vehicles and digital providers. All medium-sized and large companies in the selected sectors fall under the legislation. The original NIS Directive has already impacted the cybersecurity budgets of operators over the past year, with deep dives into the energy and health sectors; see Cybersecurity Investments in the EU: Is the Money Enough to Meet the New Cybersecurity Standards?

USA: CISA has released cross-sector cybersecurity performance goals (CPGs) in response to President Biden’s 2021 National Security Memorandum on improving cybersecurity for critical infrastructure control systems. Since then, the CPGs have been regarded by the cybersecurity community as “the floor” and “a baseline” for cybersecurity hygiene and practices. Many organizations overlook OT as part of their cybersecurity strategy, keeping their focus solely on IT systems. Especially in the critical infrastructure sectors, overlooking OT can pose serious risks to all operations. As a result, the released CPGs are explicitly scoped to include OT devices.

Android: Android security will advance in 2023 in many ways. Android is adding support for updatable root certificates in the upcoming Android 14 release, and Google Play now lets children send purchase requests to guardians.

Losing trust: The world’s biggest tech companies have lost confidence in one of the Internet’s behind-the-scenes gatekeepers. Microsoft, Mozilla, and Google are dropping TrustCor Systems as a root certificate authority in their products.

Need for better communication: At a time when less than a fifth (18%) of risk and compliance professionals profess to be very confident in their ability to clearly communicate risk to the board, it’s clear that lines of communication—not to mention understanding—must be improved.

Supply chain risks: Watch for geopolitical instability to continue to be a governance issue, particularly with the need to oversee third-party and supply chain risk.

Governance: For boards and management, heightened pressure around climate action dovetails with the SEC’s proposed rules about cybersecurity oversight, which may soon become law. When they do, companies will need to prepare for more disclosures about their cybersecurity policies and procedures. With fresh scrutiny on directors’ cybersecurity expertise, or lack thereof, boards will need to take their cyber savviness to the next level as well.

Privacy and data protection: Privacy and data protection are the big story for compliance officers in 2023, with expanding regulations soon expected to cover five billion citizens.

Auditing: Audit’s role in corporate governance and risk management has been evolving. Once strictly focused on finance and compliance, internal audit teams are now increasingly expected to help boards and executive management identify, prioritize, manage and mitigate interconnected risks across the organization.

Business risks: In 2023, business risks will run the gamut: geopolitical volatility, talent management, DEI (Diversity, Equity, and Inclusion), ESG (Environmental, Social, and Governance), IT security amid continued remote and hybrid work, and business continuity amid the threat of large-scale operational and utility interruptions. There is also the challenge that executives take more cybersecurity risks than office workers: leaders engage in more dangerous behavior and are four times more likely to be victims of phishing than office workers.

Integrated Risk Management: Look for risk to be increasingly viewed as a driver of business performance and value as digital landscapes and business models evolve. Forward-looking companies will embed integrated risk management (IRM) into their business strategy, so they can better understand the risks associated with new strategic initiatives and be able to pivot as necessary. Keep in mind that executives take more cybersecurity risks than office workers.

Zero trust: Many people consider Zero Trust a near-optimal security practice in 2023. It works well for new systems that its model suits, but Zero Trust also has challenges, and incorporating zero trust into an existing network can be very expensive. The article Zero Trust Shouldn’t Be The New Normal argues that the zero trust model starts to erode when the resources of two corporations need to play together nicely. Federated activity, ranging from authentication to resource-pooled cloud federation, doesn’t coexist well with zero trust. To usefully emulate the kind of informed trust model that humans use every day, we need to flip the entire concept of zero trust on its head, and to do that, network interactions need to be evaluated in terms of risk. That’s where identity-first networking comes in: for a network request to be accepted, it needs both an identity and explicit authorization, and System for Cross-domain Identity Management (SCIM) based synchronization is used to achieve this. This securely automates the exchange of a user identity between cloud applications, diverse networks, and service providers.

Poor software: There will be a lot of poor software in use in 2023, and it will cost lots of money. Poor software costs the US $2.4 trillion: cyberattacks exploiting existing vulnerabilities, complex issues involving the software supply chain, and the growing impact of rapidly accumulating technical debt have led to a build-up of historic software deficiencies.

Microsoft: Microsoft will permanently turn off Exchange Online basic authentication starting in early January 2023 to improve security. A future Microsoft Edge update will permanently disable the Internet Explorer 11 desktop web browser on some Windows 10 systems in February: “The out-of-support Internet Explorer 11 (IE11) desktop application is scheduled to be permanently disabled on certain versions of Windows 10 devices on February 14, 2023, through a Microsoft Edge update, not a Windows update as previously communicated.”

Google Workspace: Google Workspace gets client-side encryption in Gmail; the long-awaited client-side encryption for Gmail is now available in beta. Google is letting businesses try out client-side encryption for Gmail, but it’s probably not coming to personal accounts anytime soon. Google has already enabled optional client-side encryption for many Workspace services.

Passkeys: Google has made passkey support available in the stable version of Chrome. Passkeys use biometric verification to authenticate users and are meant to replace the use of passwords, which can be easily compromised. Passkeys are usable cross-platform with both applications and websites. Passkeys offer the same experience that password autofill does, but provide the advantage of passwordless authentication. They cannot be reused, don’t leak in server breaches, and protect users from phishing attacks. Passkeys are only available for websites that provide support for them via the WebAuthn API.

War risks: Watch for the war between Russia and Ukraine to continue in both the real world and the cyber world in 2023. Cyber is as important as missile defences, says an ex-NATO general, and the risk of escalation from cyber attacks has never been greater. A cyber attack on the German ports of Bremerhaven or Hamburg would severely impede NATO efforts to send military reinforcements to allies, retired U.S. General Ben Hodges told Reuters.

Cloud takeover: The AWS Elastic IP transfer feature gives cyberattackers free range: threat actors can take over victims’ cloud accounts to steal data, or use them for command-and-control for phishing attacks, denial of service, or other cyberattacks.

ICS: ICS and SCADA systems remain trending attack targets in 2023.

Code security: Microsoft-owned code hosting platform GitHub has just announced multiple security improvements, including free secret scanning for public repositories and mandatory two-factor authentication (2FA) for developers and contributors. The secret scanning program is meant to help developers and organizations identify exposed secrets and credentials in their code. In 2022, secret scanning helped identify 1.7 million potential secrets exposed in public repositories. Now the feature is available for free for all public repositories, to help prevent secret exposures and secure the open source ecosystem. With secret scanning alerts, you can track and act on leaked secrets directly within GitHub.
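
At its core, secret scanning is pattern matching against known token formats. A toy sketch of the idea with two illustrative patterns (GitHub’s real scanner matches hundreds of provider-specific formats and verifies hits with the providers):

```python
import re

# Two illustrative token formats; a production scanner covers far more.
PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),       # classic GitHub PAT shape
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),   # AWS access key ID shape
}

def scan_for_secrets(text: str):
    """Return (pattern name, matched string) pairs found in the text."""
    return [(name, m.group())
            for name, pat in PATTERNS.items()
            for m in pat.finditer(text)]

# AWS's documented example key, safe to use in demos:
leak = 'aws_key = "AKIAIOSFODNN7EXAMPLE"'
print(scan_for_secrets(leak))
```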

Data destruction: We must develop a cloud-compatible way of doing destruction that meets security standards. Maybe cloud providers can come up with a service to provide this capability, since only they have direct access to the underlying hardware. They have never been shy about inventing new services to charge for, and certainly plenty of companies would be eager to pay for such a service, if the appropriate certificates of destruction were provided.

PCI DSS: PCI DSS 4.0 should be on your radar in 2023 if you work in a field that needs to comply with it. The latest version of the standard will bring a new focus to an overlooked yet critically important area of security. For a long time, client-side threats, which involve security incidents and breaches that occur on the customer’s computer rather than on the company’s servers or in between the two, were disregarded. But that’s changing with the release of PCI DSS 4.0: many new requirements focus on client-side security.

SHA-1: NIST retires the SHA-1 cryptographic algorithm: not fully in 2023, but preparations for the phase-out begin. The venerable cryptographic hash function has vulnerabilities that make its further use inadvisable. According to NIST, SHA-1 ‘has reached the end of its useful life’, given that the high computing capabilities of today’s systems can easily attack the algorithm using the technique referred to as a ‘collision’ attack. SHA-1, whose initials stand for Secure Hash Algorithm, has been in use since 1995 as part of the Federal Information Processing Standards, and NIST has announced that SHA-1 should be phased out by Dec. 31, 2030, in favor of the more secure SHA-2 and SHA-3 groups of algorithms. The US National Institute of Standards and Technology (NIST) recommended that IT professionals start replacing the 27-year-old SHA-1 cryptographic algorithm with newer, more secure ones. Because SHA-1 is used as the foundation of numerous security applications, the phase-out period will take many years. Tech giants such as Google, Facebook, Microsoft and Mozilla have already taken steps to move away from the SHA-1 cryptographic algorithm, and certificate authorities stopped issuing certificates using SHA-1 as of January 1, 2017.

Cloud: Is cloud native security good enough? Cloud native technologies enable organizations to tap into the agility required to keep up in the current competitive landscape and to create new business models. But achieving efficient, flexible, distributed and resilient cloud native security is tough. All major public cloud providers (Amazon Web Services, Microsoft Azure and Google Cloud) of course offer security features and services designed to address significant threats to cloud-based data. In spite of this, however, public cloud providers’ security tools commonly fail to meet operational needs, and their limitations should prompt organizations to reconsider how they are protecting public cloud environments.

Privacy: The Privacy War Is Coming. Privacy standards are only going to increase. It’s time for organizations to get ahead of the coming reckoning.

Ethical hacking: Ethical hacking has become a highly sought-after career route for emerging tech aspirants. Ethical hackers enable countless businesses and individuals to improve their security posture and minimize the potential attack risk for organizations. But several analysts believe that becoming a self-taught ethical hacker in 2023 might not be worth it, because self-taught hackers are at constant risk of failing to perform properly and many companies might not want to hire them.

MFA: Two-factor authentication might not be enough in 2023 for applications that need strong security. In the past few months, we’ve seen an unprecedented number of identity theft attacks targeting accounts protected by two-factor authentication (2FA), challenging the perception that existing 2FA solutions provide adequate protection against identity theft attacks. So for some demanding users, 2FA is over. Long live 3FA!

Cloud APIs: With cloud come APIs and security headaches, also in 2023. Web application programming interfaces (APIs) are the glue that holds cloud applications and infrastructure together, but these endpoints are increasingly under attack, with half of companies acknowledging an API-related security incident in the past 12 months. According to a survey conducted by Google Cloud, the most troublesome security problems affecting companies’ use of APIs are security misconfigurations, outdated APIs and components, and spam or abuse bots. About 40% of companies have suffered an incident due to misconfiguration, and a third are coping with the latter two issues. Two-thirds of companies (67%) found API-related security issues and vulnerabilities during the testing phase, but more than three-quarters (77%) are confident that they will catch issues, saying they have the required API tools and solutions.

Lack of cyber security workers: Businesses need to secure their assets and ensure the continuous readiness of employees to respond to a cyberattack if they want to move forward safely and avoid losses caused by cybercriminals or malicious attackers. There is an acute shortage of cyber security professionals. As threat levels remain high, companies and organizations remain on alert but face ongoing challenges in finding and retaining the right people with the required skill levels. There is a significant skills gap and a clear need for hiring cyber security experts in organizations across the world.

VPN: Is Enterprise VPN on Life Support or Ripe for Reinvention? While enterprise VPNs fill a vital role for business, they have several limitations. To get work-from-anywhere initiatives off the ground quickly and keep their business afloat, many organizations turned to enterprise virtual private networks (VPNs). This allowed them to connect their remote employees to critical business operations at the corporate site. However, as fast as VPNs were deployed, organizations learned their limitations and security risks. So are traditional VPNs really “dead” as some industry analysts and pundits claim? Or do they simply need a refresh? Time will tell, and this will be discussed in 2023.

AI: Corporations have discovered the power of artificial intelligence (AI) to transform what’s possible in their operations. But with great promise comes great responsibility, and a growing imperative for monitoring and governance: “As algorithmic decision-making becomes part of many core business functions, it creates the kind of enterprise risks to which boards need to pay attention.”

AI dangers: Large AI language models have potential dangers; AI is better at fooling humans than ever, and the consequences will be serious. A Wired magazine article expects that in 2023 we may well see our first death by chatbot. Causality will be hard to prove: was it really the words of the chatbot that put the murderer over the edge? Or perhaps a chatbot broke someone’s heart so badly that they felt compelled to take their own life?

Metaverse: Police must prepare for new crimes in the metaverse, says Europol. It encourages law enforcement agencies to start considering the ways in which existing types of crime could spread to virtual worlds, while entirely new crimes could start to appear. Read the report Policing in the metaverse: what law enforcement needs to know for more information.

Blockchain: Digital products like cryptocurrency and blockchain will affect a company’s risk profile. Boards and management will need to understand these assets’ potential impact and align governance with their overall risk and business strategies. The year 2022 already showed many cryptocurrency-related risks being realized, and more “crypto travel rules” are being enacted to combat money laundering and terrorism financing.

Insurance: Getting cyber insurance may become harder and more expensive in 2023. Insurance executives have been increasingly vocal in recent years about systemic risks, and now name cyber as the risk to watch. Spiralling cyber losses in recent years have prompted emergency measures by the sector’s underwriters to limit their exposure, and there is growing concern among industry executives about large-scale attacks. As well as pushing up prices, some insurers have responded by tweaking policies so clients retain more losses. Some insurance policies already written in the market include an exemption for state-backed attacks, but the difficulty of identifying those behind attacks and their affiliations makes such exemptions legally fraught. The chief executive of one of Europe’s biggest insurance companies has warned that cyber attacks, rather than natural catastrophes, will become “uninsurable” as the disruption from hacks continues to grow, pointing to recent attacks that have disrupted hospitals, shut down pipelines and targeted government departments: “What if someone takes control of vital parts of our infrastructure, the consequences of that?” In September, the US government called for views on whether a federal insurance response to cyber was warranted.

Sources:

Asiantuntija neuvoo käyttämään pilkkua salasanassa – taustalla vinha logiikka (Expert advises using a comma in your password – there is a clever logic behind it)

Overseeing artificial intelligence: Moving your board from reticence to confidence

Android is adding support for updatable root certificates amidst TrustCor scare

Google Play now lets children send purchase requests to guardians

Diligent’s outlook for 2023: Risk is the trend to watch

Microsoft will turn off Exchange Online basic auth in January

Google is letting businesses try out client-side encryption for Gmail

Google Workspace Gets Client-Side Encryption in Gmail

The risk of escalation from cyberattacks has never been greater

Client-side encryption for Gmail available in beta

AWS Elastic IP Transfer Feature Gives Cyberattackers Free Range

Microsoft: Edge update will disable Internet Explorer in February

Is Cloud Native Security Good Enough?

The Privacy War Is Coming

Top Reasons Not to Become a Self-Taught Ethical Hacker in 2023

Google Chrome preparing an option to block insecure HTTP downloads

Cyber attacks set to become ‘uninsurable’, says Zurich chief

The Dark Risk of Large Language Models

Police Must Prepare For New Crimes In The Metaverse, Says Europol

Policing in the metaverse: what law enforcement needs to know

Cyber as important as missile defences – an ex-NATO general

Misconfigurations, Vulnerabilities Found in 95% of Applications

Mind the Gap

Yritysten kyberturvassa edelleen isoja aukkoja. Asiantuntija: Kysymys jopa kansallisesta turvallisuudesta (Companies’ cyber security still has big gaps. Expert: It is even a question of national security)

Personnel security in the cloud

Multi-factor auth fatigue is real – and it’s why you may be in the headlines next

MFA Fatigue attacks are putting your organization at risk

Cybersecurity: Parliament adopts new law to strengthen EU-wide resilience | News | European Parliament

NIS2 hyväksyttiin – EU-maille tiukemmat kyberturvavaatimukset (NIS2 approved – stricter cyber security requirements for EU countries)

Cybersecurity Investments in the EU: Is the Money Enough to Meet the New Cybersecurity Standards?

Poor software costs the US 2.4 trillion

Passkeys Now Fully Supported in Google Chrome

Google Takes Gmail Security to the Next Level with Client-Side Encryption

Executives take more cybersecurity risks than office workers

NIST Retires SHA-1 Cryptographic Algorithm

NIST to Retire 27-Year-Old SHA-1 Cryptographic Algorithm

WatchGuard Threat Lab Report Finds Top Threat Arriving Exclusively Over Encrypted Connections

Over 85% of Attacks Hide in Encrypted Channels

GitHub Announces Free Secret Scanning, Mandatory 2FA

Leaked a secret? Check your GitHub alerts…for free

Data Destruction Policies in the Age of Cloud Computing

Why PCI DSS 4.0 Should Be on Your Radar in 2023

2FA is over. Long live 3FA!

Google: With Cloud Comes APIs & Security Headaches

Digesting CISA’s Cross-Sector Cybersecurity Performance Goals

Zero Trust Shouldn’t Be The New Normal

Don’t click too quick! FBI warns of malicious search engine ads

FBI Recommends Ad Blockers as Cybercriminals Impersonate Brands in Search Engine Ads

Cyber Criminals Impersonating Brands Using Search Engine Advertisement Services to Defraud Users

Kyberturvan ammattilaisista on huutava pula (There is an acute shortage of cyber security professionals)

Is Enterprise VPN on Life Support or Ripe for Reinvention?

1,768 Comments

  1. Tomi Engdahl says:

    Terrifying study shows how fast AI can crack your passwords; here’s how to protect yourself
    https://9to5mac.com/2023/04/07/ai-cracks-passwords-this-fast-how-to-protect/

  2. Tomi Engdahl says:

    JAMES CAMERON WORKING ON NEW “TERMINATOR” SCRIPT INSPIRED BY RISE OF ACTUAL AI
    https://futurism.com/the-byte/james-cameron-terminator-script-actual-ai

  3. Tomi Engdahl says:

    Pentagon Leaks Emphasize the Need for a Trusted Workforce
    Tightening access controls and security clearance alone won’t prevent insider threat risks motivated by lack of trust or loyalty.
    https://www.darkreading.com/vulnerabilities-threats/pentagon-leaks-emphasize-the-need-for-a-trusted-workforce

  4. Tomi Engdahl says:

    Sniffnet: An Interesting Open-Source Network Monitoring Tool Anyone Can Use
    Take a glance at your network connection with this handy app.
    https://news.itsfoss.com/sniffnet/

  5. Tomi Engdahl says:

    New GobRAT Remote Access Trojan Targeting Linux Routers in Japan
    https://thehackernews.com/2023/05/new-gobrat-remote-access-trojan.html

  6. Tomi Engdahl says:

    ‘Hot Pixel’ Attack Steals Data From Apple, Intel, Nvidia, and AMD Chips via Frequency, Power and Temperature Info
    By Paul Alcorn published 7 days ago
    DVFS mechanisms can be exploited to steal data.
    https://www.tomshardware.com/news/hot-pixel-attack-steals-data-from-apple-and-nvidia-chips-using-frequency-power-and-temperature-info

  7. Tomi Engdahl says:

    Raspberry Pi Malware Infects Using Default Username and Password
    By Ash Hill published 5 days ago
    Think your Pi project is small and insignificant? This trojan doesn’t care.
    https://www.tomshardware.com/news/raspberry-pi-malware-uses-default-credentials

  8. Tomi Engdahl says:

    Cyber Flight Path Navigator – A Step Towards Building A Cyber Career Pathway
    https://seanmitchell.tech/cyber-flight-path-navigator/

  9. Tomi Engdahl says:

    Is this how scam calls will be stopped? A solution developed in Finland is attracting attention worldwide
    30.5.2023 10:08 | updated 30.5.2023 10:11
    Cooperation between Finnish authorities and companies to prevent phone number spoofing received attention at one of the world’s largest information security events.
    https://www.mikrobitti.fi/uutiset/nainko-huijaussoitot-saadaan-loppumaan-suomessa-kehitetty-ratkaisu-keraa-huomiota-maailmalla/0c0e1929-b6ae-4ad8-b683-4a78f7c3f3dd

  10. Tomi Engdahl says:

    US law enforcement agencies have apparently used the algorithm to perform nearly one million searches.

    AI COMPANY SCRAPED 30 BILLION FACEBOOK PHOTOS SO COPS CAN FACIAL ID ANYONE
    https://futurism.com/the-byte/ai-30-billion-facebook-photos-cops-facial-id

    “CLEARVIEW IS A TOTAL AFFRONT TO PEOPLES’ RIGHTS, FULL STOP, AND POLICE SHOULD NOT BE ABLE TO USE THIS TOOL.”

    Clearview AI, the company behind a widely-used facial recognition technology that has already led American police to charge innocent people with crimes they didn’t commit — claims to have scraped 30 billion Facebook photos in order to train its AI algorithm, according to comments that CEO Hoan Ton-That provided to the BBC last week.

    Clearview AI used nearly 1m times by US police, it tells the BBC
    https://www.bbc.com/news/technology-65057011

  11. Tomi Engdahl says:

    NSA and FBI: Kimsuky hackers pose as journalists to steal intel https://www.bleepingcomputer.com/news/security/nsa-and-fbi-kimsuky-hackers-pose-as-journalists-to-steal-intel/

    State-sponsored North Korean hacker group Kimsuky (a.k.a. APT43) has been impersonating journalists and academics in spear-phishing campaigns to collect intelligence from think tanks, research centers, academic institutions, and various media organizations.

    A joint advisory from the Federal Bureau of Investigation (FBI), the U.S.
    Department of State, the National Security Agency (NSA), alongside South Korea’s National Intelligence Service (NIS), National Police Agency (NPA), and Ministry of Foreign Affairs (MOFA), notes that Kimsuky is part of North Korea’s Reconnaissance General Bureau (RGB).

    “Some targeted entities may discount the threat posed by these social engineering campaigns, either because they do not perceive their research and communications as sensitive in nature, or because they are not aware of how these efforts fuel the regime’s broader cyber espionage efforts,” reads the advisory.

  12. Tomi Engdahl says:

    Camaro Dragon Strikes with New TinyNote Backdoor for Intelligence Gathering https://thehackernews.com/2023/06/camaro-dragon-strikes-with-new-tinynote.html

    The Chinese nation-state group known as Camaro Dragon has been linked to yet another backdoor that’s designed to meet its intelligence-gathering goals.

    Israeli cybersecurity firm Check Point, which dubbed the Go-based malware TinyNote, said it functions as a first-stage payload capable of “basic machine enumeration and command execution via PowerShell or Goroutines.”

    What the malware lacks in sophistication, it makes up for by establishing redundant ways to retain access to the compromised host: multiple persistence tasks and varied methods of communicating with different servers.

    Reply
  13. Tomi Engdahl says:

    Windows 11 to require SMB signing to prevent NTLM relay attacks https://www.bleepingcomputer.com/news/security/windows-11-to-require-smb-signing-to-prevent-ntlm-relay-attacks/

    Microsoft says SMB signing (aka security signatures) will be required by default for all connections to defend against NTLM relay attacks, starting with today’s Windows build (Enterprise edition) rolling out to Insiders in the Canary Channel.

    In such attacks, threat actors force network devices (including domain controllers) to authenticate against malicious servers under the attackers’ control to impersonate them and elevate privileges to gain complete control over the Windows domain.

    “This changes legacy behavior, where Windows 10 and 11 required SMB signing by default only when connecting to shares named SYSVOL and NETLOGON and where Active Directory domain controllers required SMB signing when any client connected to them,” Microsoft said.
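
    A simplified sketch of why signing defeats relay attacks: SMB 2 dialects sign each packet with HMAC-SHA256 keyed by a session key derived during authentication (SMB 3 uses AES-CMAC/GMAC instead), so a relay server that never learns the key cannot produce or alter validly signed messages. The key and payload below are illustrative values, not real protocol traffic.

```python
import hashlib
import hmac

def sign_smb2_message(session_key: bytes, message: bytes) -> bytes:
    """SMB 2 truncates the HMAC-SHA256 of the packet to a
    16-byte signature field, keyed by the session key."""
    return hmac.new(session_key, message, hashlib.sha256).digest()[:16]

def verify(session_key: bytes, message: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_smb2_message(session_key, message), signature)

# Legitimate client and server share the session key.
key = b"\x11" * 16
msg = b"SMB2 WRITE request payload"
sig = sign_smb2_message(key, msg)
assert verify(key, msg, sig)

# A tampered message, or one signed under a different key
# (as a relay server would have to do), fails verification.
assert not verify(key, msg + b"tampered", sig)
assert not verify(b"\x22" * 16, msg, sig)
```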

    Reply
  14. Tomi Engdahl says:

    Microsoft is killing Cortana on Windows starting late 2023 https://www.bleepingcomputer.com/news/microsoft/microsoft-is-killing-cortana-on-windows-starting-late-2023/

    After introducing a string of AI-powered assistants for its products, Microsoft has now announced that it will soon end support for the Windows standalone Cortana app.

    Cortana provides voice-based assistance and handles a wide range of tasks, such as setting reminders, creating calendar events, providing weather updates, and searching the web.

    Initially introduced as part of the Windows Phone operating system, Cortana has since expanded to other platforms, including Windows 10, Android, and iOS.
    It’s now deeply integrated into Microsoft’s ecosystem and was designed to work closely with other Microsoft products.

    According to a support document published on Wednesday and spotted by Windows Central, the personal productivity assistant will be retired in late 2023, 8 years after its inclusion in Windows 10 in 2015.

    Reply
  15. Tomi Engdahl says:

    Detecting DLL Injection in Windows | by Suprajabaskaran | May, 2023 | InfoSec Write-ups
    https://infosecwriteups.com/detecting-dll-injection-in-windows-804e065f5eb7
    I am writing this post to introduce my Python-based tool for detecting potential DLL injection on the Windows operating system. This is a continuation of the previous post on Ghidra.
    We built our application as a proposal to become part of Ghidra’s auto-analysis functionality. It uses several Ghidra features to help analyze an executable or binary. Our main objective is to identify potential DLL injection by a malware binary, which could lead to serious security consequences such as privilege escalation and process or system crashes.
    If an executable or binary makes use of custom DLLs and harmful function calls that are specifically used in injection exploits, we infer that the binary might perform DLL injection and warn the user.
    DLL injection is a technique used in software development and security testing to execute code within the address space of a running process. This is achieved by making a target process load a malicious or benign DLL into its address space, where the code can then be executed by the process. DLL injection can be used for a variety of purposes, including debugging, hooking system functions, and malware attacks, making it both a powerful tool and a potential security vulnerability.
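
    The tool described above is built on Ghidra; purely as an illustration of the same idea, a heuristic of this flavor can be sketched in plain Python: flag a binary whose import list contains the classic remote-injection API combination. The function and data here are hypothetical, not part of the author’s tool.

```python
# Hypothetical illustration (not the Ghidra-based tool above): flag binaries
# whose import table contains the classic remote-injection API combination.
INJECTION_APIS = {"OpenProcess", "VirtualAllocEx",
                  "WriteProcessMemory", "CreateRemoteThread"}

def looks_like_injector(imported_functions: set) -> bool:
    """Heuristic: a PE importing all four APIs can open another process,
    allocate memory in it, copy a DLL path or shellcode there, and start
    a remote thread -- the textbook DLL-injection sequence."""
    return INJECTION_APIS.issubset(imported_functions)

benign = {"CreateFileW", "ReadFile", "CloseHandle"}
suspect = benign | INJECTION_APIS
assert not looks_like_injector(benign)
assert looks_like_injector(suspect)
```

    Real tools also weigh context (e.g. whether LoadLibrary is resolved dynamically via GetProcAddress), since legitimate software such as debuggers imports these same APIs.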

    Reply
  16. Tomi Engdahl says:

    Insider Q&A: Artificial Intelligence and Cybersecurity In Military Tech
    https://www.securityweek.com/insider-qa-artificial-intelligence-and-cybersecurity-in-military-tech/

    Shift5 founder Josh Lospinoso discusses AI and how software vulnerabilities in weapons systems are a major threat to the U.S. military.

    Q: In your testimony, you described two principal threats to AI-enabled technologies: One is theft. That’s self-explanatory. The other is data poisoning. Can you explain that?

    A: One way to think about data poisoning is as digital disinformation. If adversaries are able to craft the data that AI-enabled technologies see, they can profoundly impact how that technology operates.

    Q: Is data poisoning happening?

    A: We are not seeing it broadly. But it has occurred. One of the best-known cases happened in 2016. Microsoft released a Twitter chatbot it named Tay that learned from conversations it had online. Malicious users conspired to tweet abusive, offensive language at it. Tay began to generate inflammatory content. Microsoft took it offline.

    Q: AI isn’t just chatbots. It has long been integral to cybersecurity, right?

    A: AI is used in email filters to try to flag and segregate junk mail and phishing lures. Another example is endpoints, like the antivirus program on your laptop – or malware detection software that runs on networks. Of course, offensive hackers also use AI to try to defeat those classification systems. That’s called adversarial AI.

    Q: Let’s talk about military software systems. An alarming 2018 Government Accountability Office report said nearly all newly developed weapons systems had mission critical vulnerabilities. And the Pentagon is thinking about putting AI into such systems?

    A: There are two issues here. First, we need to adequately secure existing weapons systems. This is a technical debt we have that is going to take a very long time to pay. Then there is a new frontier of securing AI algorithms – novel things that we would install. The GAO report didn’t really talk about AI. So forget AI for a second. If these systems just stayed the way that they are, they’re still profoundly vulnerable.

    Reply
  17. Tomi Engdahl says:

    Artificial Intelligence
    OpenAI Unveils Million-Dollar Cybersecurity Grant Program
    https://www.securityweek.com/openai-unveils-million-dollar-cybersecurity-grant-program/

    OpenAI plans to shell out $1 million in grants for projects that empower defensive use-cases for generative AI technology.

    Reply
  18. Tomi Engdahl says:

    Most companies’ OT systems have been breached
    https://etn.fi/index.php/13-news/15050-useimpien-yritysten-ot-jaerjestelmaeaen-on-murtauduttu-2

    Operational technology (OT) has become an important part of cybersecurity work in organizations. According to a survey by security company Fortinet, in 2022 six percent of organizations reported that no one had managed to break into their systems; in 2023 that figure has already risen to 25 percent.

    Although the situation has improved in a year, there is still room for improvement if three out of four companies have had their OT network breached during the past year. At the same time, ever more devices are being connected to OT networks.

    “Based on our discussions with local companies, network-connected OT devices have also proliferated explosively in Finland. IT and OT are converging. Everything is connected to the network, and data enabling, for example, real-time data analysis is shared more than ever. This means the potential attack vector grows, and with it the risks,” says Jani Ekman, CTO of Fortinet Finland.

    OT networks are broken into with the same methods as IT networks. Intrusions most typically resulted from various kinds of malware (56%) and phishing (49%). A third of respondents reported falling victim to a ransomware attack last year.

    According to Ekman, taking care of OT security requires a different mindset. “Many devices have a predicted service life of more than 25 years, and they were not designed to meet the challenges of today’s cybersecurity environment. Many of the devices are part of business-critical production and cannot simply be taken off the network to install patches.”

    Reply
  19. Tomi Engdahl says:

    Traficom warns: text messages and phone calls can also come from scammers
    https://etn.fi/index.php/13-news/15041-traficom-varoittaa-myoes-tekstarit-ja-puhelut-voivat-tulla-huijareilta

    Criminals often pose as banks, and scam techniques are constantly evolving. The National Cyber Security Centre of the Finnish Transport and Communications Agency Traficom warns that, in addition to the familiar scam emails, text messages and phone calls are also used for fraud.

    Reply
  20. Tomi Engdahl says:

    Cyber threats will strike cars next
    https://www.uusiteknologia.fi/2023/06/02/kyberuhkat-iskevat-seuraavaksi-autoihin/

    Security company Trend Micro estimates that cyber threats will grow significantly over the next 3-5 years. The trend is reinforced by the growing amount of electronics in cars and the arrival of new, ever smarter electric cars on the market.

    This could be a new kind of crossover between cybercrime and traditional physical crime, of the sort we have seen before, for example in connection with ATM break-ins. According to Salminen, it is therefore important that the automotive industry starts preparing for such situations in good time, so that network-connected cars are as well protected as possible when a real incident occurs.

    Access to a vehicle’s user account may enable:

    Unlocking and starting the car remotely
    Opening the car’s doors and stealing valuables
    Using the car in crimes, such as drug deals, or ramming it through a robbery target’s window or door
    Chopping the car up and selling it as black-market spare parts
    Locating the car and monitoring its routes, which gives criminals the information to find the owner’s home and work out when they are away

    Reply
  21. Tomi Engdahl says:

    What if the Current AI Hype Is a Dead End?
    https://www.securityweek.com/what-if-the-current-ai-hype-is-a-dead-end/

    If we should face a Dead-End AI future, the cybersecurity industry will continue to rely heavily on traditional approaches, especially human-driven ones. It won’t quite be business as usual though.

    AI Future #1: Dead End AI

    The hype man’s job is to get everybody out of their seats and on the dance floor to have a good time.

    Flavor Flav

    This week we posit a future we’re calling “Dead End AI”, where AI fails to live up to the hype surrounding it. We consider two possible scenarios in such a future. Both have similar near- to mid-term outcomes, so we can discuss them together.

    Scenario #1: AI ends up as another hype cycle, like crypto, NFTs and the Metaverse.

    Scenario #2: AI is overhyped and the resulting disappointment leads to defunding and a new AI winter.

    In a Dead-End AI future, the hype currently surrounding artificial intelligence ultimately proves to be unfounded. The excitement and investment in AI dwindle as the reality of the technology’s limitations sets in. The AI community experiences disillusionment, leading to a new AI winter where funding and research are significantly reduced.

    Economic factors

    Investors are rushing into Generative AI, with early-stage startup investors investing $2.2B in 2022 (contrast this with $5.8B for the whole of Europe). But if AI fails to deliver the expected return on investment it will be catastrophic for further funding for AI research and development.

    The venture capital firm Andreessen Horowitz (a16z), for example, published a report stating that a typical startup working with large language models spends 80% of its capital on compute costs. The report’s authors also state that the cost of a single GPT-3 training run ranges from $500,000 to $4.6 million, depending on hardware assumptions.

    Paradoxically, investing these moonshot amounts of money won’t necessarily guarantee economic success or viability: a leaked Google report recently argued that there is no moat against general and open-source adoption of these sorts of models. Others, like Snapchat, rushed to market prematurely with an offering, only to crash and burn.

    High development costs like that, together with the absence of profitable applications, will not make investors or shareholders happy. The result is capital destruction on a massive scale, pleasing only a handful of cloud and hardware providers.

    Limited progress in practical applications

    While we have made significant advancements in narrow AI applications, we have not seen progress towards true artificial general intelligence (AGI), despite unfounded claims that it may somehow arise emergently. Generative AI models have displayed uncanny phenomena, but they are entirely explainable, including their limitations.

    In between the flood of articles gushing about how AI is automating everything in marketing, development and design, there is also a growing trickle of evidence that the field of application for these sorts of models may be quite narrow. Automation in real-world scenarios requires a high degree of accuracy and precision (for example, when blocking phishing attempts) that LLMs aren’t designed for.

    Some technical experts are already voicing concern about the vast difference between what the current models actually do and how they are being described and, more importantly, sold, and are already sounding the alarm about a new AI winter.

    Privacy and ethical concerns

    Another set of growing signals concerns privacy, ethics, and the potential misuse of AI systems. There are surprisingly many voices arguing for stricter regulations, which could hinder AI development and adoption, resulting in a dead-end AI scenario.

    Geoffrey Hinton, one of the pioneers of artificial neural networks, recently quit his job at Google so he could warn the world, without any conflicts of interest, about what he feels are the risks and dangers of uncontrolled AI. The White House called a meeting with executives from Google, Microsoft, OpenAI, and Anthropic to discuss the future of AI. The biggest surprise is probably a CEO asking to be regulated, something that OpenAI’s Sam Altman urged the US Congress to do. One article even goes as far as advocating that we need to evaluate the beliefs of the people in control of such technologies, suggesting that they may be more willing to accept existential risks.

    Environmental Impact

    The promise of AI is not based on automation alone: it also has to be cheap, readily available and, increasingly, sustainable. AI may be technically feasible, but it may be uneconomic, or even bad for the environment.

    A lot of the available data indicates that AI technologies like LLMs have a considerable environmental impact. A recent study, “Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models”, calculated that a typical conversation of 20-50 questions consumes 500 ml of water, and that up to 700,000 liters of water may have been needed just to train GPT-3.

    Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) 2023 Artificial Intelligence Index Report concluded that a single training run for GPT-3 put out the equivalent of 502 tons of CO2, with even the most energy-efficient model, BLOOM, emitting more carbon than the average American emits per year (25 tons for BLOOM versus 18 for a person).
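
    Taking the quoted figures at face value, the scale is easy to put in everyday terms with some back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
training_water_l = 700_000        # upper water estimate for training GPT-3
water_per_conversation_l = 0.5    # 500 ml per 20-50 question conversation
conversations_equivalent = training_water_l / water_per_conversation_l
assert conversations_equivalent == 1_400_000  # 1.4 million conversations

gpt3_run_tons_co2 = 502           # one GPT-3 training run (HAI report)
avg_american_tons_co2 = 18        # average American, per year
assert round(gpt3_run_tons_co2 / avg_american_tons_co2) == 28  # ~28 person-years
```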

    Implications for Security Operations

    If the current wave of AI technologies is being woefully overhyped and turns out to be a dead end, the implications for security operations could be as follows:

    Traditional methods will come back into focus.

    With AI failing to deliver on its promise of intelligent automation and analytics, cybersecurity operations will continue to rely on human-driven processes and traditional security measures.

    This means that security professionals will have to keep refining existing techniques like zero-trust and cyber hygiene. They will also have to continue to create and curate an endless stream of up-to-date detections, playbooks, and threat intelligence to keep pace with the ever-evolving threat landscape.

    Automation will plateau.

    Without more intelligent machine automation, organizations will continue to struggle with talent shortages in the cybersecurity field. For analysts the manual workload will remain high. Organizations will need to find other ways to streamline operations.

    Automation approaches like SOAR will remain very manual, and still be based on static and preconfigured playbooks. No- and Low-code automation may help make automation easier and accessible, but automation will remain essentially scripted and dumb.

    However – even today’s level of LLM capability is already sufficient to automate basic log parsing, event transformation, and some classification use-cases. These sorts of capabilities will be ubiquitous by the end of 2024 in almost all security solutions.

    Threat detection and response will remain slow

    In the absence of AI-driven solutions, threat detection and response times can improve only marginally. Reducing the window of opportunity hackers have to exploit vulnerabilities and cause damage will mean becoming operationally more effective. Organizations will have to focus on enhancing their existing systems and processes to minimize the impact of slow detection and response times. Automation will be integrated more selectively but aggressively.

    Threat intelligence will continue to be hard to manage.

    In the absence of AI-driven analysis, gathering and curating threat intelligence will remain difficult for vendors, and using it strategically will remain challenging for most end users. Security teams will have to rely on manual processes to gather, analyze, and contextualize threat information, potentially leading to delays in awareness of and response to new and evolving threats. The ability to disseminate and analyze large amounts of threat intelligence will have to be enhanced using simpler means, for example with visualizations and graph analysis. Collective and collaborative intelligence sharing will also need to be revisited and revised.

    Renewed emphasis on human expertise

    If AI fails to deliver, the importance of human expertise in cybersecurity operations will become even more critical. Organizations will need to continue to prioritize hiring, training, and retaining skilled cybersecurity professionals to protect their assets and minimize risks.

    Reply
  22. Tomi Engdahl says:

    Supply Chain Security
    SBOMs – Software Supply Chain Security’s Future or Fantasy?
    https://www.securityweek.com/sboms-software-supply-chain-securitys-future-or-fantasy/

    If after eighteen months, meaningful use of SBOMs is unachievable, we need to ask what needs to be done to fulfill Biden’s executive order.

    Two years after the requirement for Software Bills of Materials (SBOMs) was announced, we are nowhere near meeting it. Are SBOMs an achievable dream, or an elusive fantasy?

    President Biden’s cybersecurity executive order of May 2021 introduced the concept of the mandatory software bill of materials (SBOM). The intention can be summarized quite simply – to provide transparency and visibility into the components used within new software, and thereby improve the security of the software supply chain.

    Eighteen months later, in December 2022, a big tech lobbying group representing Amazon, Apple, Cisco, Google, IBM, Intel, Mastercard, Meta, Microsoft, Samsung, Siemens, Verisign and more, wrote to the OMB: “We ask that OMB discourage agencies from requiring artifacts [SBOMs] until there is a greater understanding of how they ought to be provided and until agencies are ready to consume the artifacts that they request.”

    If after eighteen months – and we’re still in the same position today – meaningful use of SBOMs is unachievable, we need to ask what needs to be done to fulfill Biden’s executive order.

    The function of the SBOM

    The purpose of the SBOM is to improve the security of the software supply chain by providing visibility into areas of an application that are otherwise hidden. It provides details on every code component that is used in an application. “It builds out a list of all the packages and shared libraries used in each application, along with their version number,” explains Matt Psencik, director at Tanium.

    “If a vulnerability is released for a specific package, you can either update that package, remove it, or contact a vendor to see if a new patch is available to remediate the vulnerability,” he continues.

    The SBOM is designed to provide details on every code component included in the application, whether commercial software components, open source software (OSS) libraries and dependencies, or any in-house developed libraries. “This information can be used to prioritize security patches and updates, track and manage vulnerabilities, and monitor compliance with relevant regulations and standards,” says Anthony Tam, manager of security engineering at Tigera.

    The usual analogy is with a list of ingredients on a food product. Knowing the ingredients allows the purchaser to detect any risks involved in consuming the product. But the SBOM goes much deeper and provides significantly more data than a food ingredients list.
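
    A minimal sketch of the workflow Psencik describes, using hypothetical component entries (the log4j-core/CVE-2021-44228 pairing is a real historical example): an SBOM’s (package, version) list is matched against a vulnerability advisory feed to find components needing a patch, removal, or vendor follow-up.

```python
# Hypothetical SBOM entries: each component with its version, as an
# SBOM "ingredients list" would record them.
sbom = [
    {"name": "openssl", "version": "1.1.1t"},
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "in-house-auth", "version": "3.2.0"},
]

# Advisory feed keyed by (package, affected version).
advisories = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",  # Log4Shell
}

def vulnerable_components(sbom, advisories):
    """Return (component, CVE) pairs that need a patch, removal,
    or vendor follow-up."""
    return [(c, advisories[(c["name"], c["version"])])
            for c in sbom
            if (c["name"], c["version"]) in advisories]

hits = vulnerable_components(sbom, advisories)
assert hits == [({"name": "log4j-core", "version": "2.14.1"},
                 "CVE-2021-44228")]
```

    Real SBOM formats (SPDX, CycloneDX) carry much more than name and version, such as suppliers, hashes, and dependency relationships, which is part of why producing and consuming them at scale is hard.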

    While an SBOM details every component from whatever source, the primary target, and the biggest problem in the software supply chain, is OSS. OSS is also the biggest hindrance to the SBOM project.

    The OSS problem

    OSS is pervasive. It is unlikely that any new application is built without using OSS components. They are free and readily available, and they help developers build their new applications faster.

    The problem is that there is so much OSS; it is often developed by a single person or a small team of (generally unpaid) collaborators, and each OSS library often depends on other libraries or components that have further dependencies of their own. In a market economy that demands speed, it is uneconomical to expect application developers to know which dependencies are pulled into their application, or even which parts of an OSS library their application actually uses.

    Consider these figures. According to GitHub’s Octoverse 2022 report, there were 52 million new open source projects on GitHub in 2022, with developers across GitHub making more than 413 million contributions to open source projects throughout the year. Sourceforge hosts a further 500,000 OSS projects, while Apache hosts another 350 projects. And there are other sources.

    The developers of these projects tend to be coders, not security specialists. In theory, the open nature of the code allows third party researchers to examine the code for bugs, vulnerabilities, and malware. Unfortunately, the same openness allows attackers to find those vulnerabilities and sometimes insert their own malware.

    The SBOM is designed to make OSS problems more visible to commercial software application developers, and application buyers and users. But the sheer scale of the OSS market explains the difficulties. It is a major reason for the slow progress of the SBOM project.

    Current state of the SBOM

    One of the perceived problems in the evolution of the SBOM is there is no precise specification of what it should provide, or in what format, nor how it should be interpreted and used. Perhaps the closest is the NTIA’s Minimum Elements for a Software Bill of Materials published in July 2021. But as this document (PDF) concludes, “The minimum elements of an SBOM are a starting point… the Federal Government should encourage or develop resources on how to implement SBOMs.”

    In April 2023, CISA released three SBOM-related documents. The first was titled the Sharing Lifecycle Report. It shows how an SBOM moves from the author to the consumer, and how the SBOM can lead to product enrichment in the process.

    But the document doesn’t specify how the SBOM should be created, how it should be shared, nor how it should be interpreted.

    Two further documents published in April are Types of Software Bill of Material (SBOM) Documents, and Minimum Requirements for Vulnerability Exploitability eXchange (VEX).

    Throughout these documents there are options the software author can use in the production and consumption of SBOMs, but nothing to say what the author should or must be doing. It is this lack of instruction, this allowance of market forces to phrase and solve the problem, that is perhaps the biggest current drag on the SBOM project.

    https://www.ntia.doc.gov/files/ntia/publications/sbom_minimum_elements_report.pdf

    Reply
  23. Tomi Engdahl says:

    The 2023 State of Ransomware in Education: 84% increase in attacks over 6-month period https://www.malwarebytes.com/blog/threat-intelligence/2023/06/the-2023-state-of-ransomware-in-education-84-increase-in-known-attacks-over-6-month-period

    Ransomware gangs have made the past year a hard one for the education sector.

    Between June 2022 and May 2023, there were 190 known ransomware attacks against educational institutions, and many more that went unreported and unrecorded. Between the first and second six months of that period, education experienced an 84% increase in attacks.

    Reply
  24. Tomi Engdahl says:

    Adversaries increasingly using vendor and contractor accounts to infiltrate networks https://blog.talosintelligence.com/vendor-contractor-account-abuse/

    The software supply chain has become a key security focus for many organizations, but the risks associated with supply chain attacks are often misunderstood.

    Successful software-focused supply chain attacks can give an adversary access to dozens or even hundreds of victims, but they are resource-intensive and require an extensive understanding of the target environment, the build process, and the software itself.

    Reply
  25. Tomi Engdahl says:

    Verizon 2023 DBIR: Human Error Involved in Many Breaches, Ransomware Cost Surges
    https://www.securityweek.com/verizon-2023-dbir-human-error-involved-in-many-breaches-ransomware-cost-surges/

    Verizon’s 16th annual Data Breach Investigations Report (DBIR) provides data on ransomware costs, the frequency of human error in breaches, and BEC trends.

    Verizon on Tuesday published its 16th annual Data Breach Investigations Report (DBIR) to provide organizations with useful information collected from incidents investigated by its Threat Research Advisory Center.

    The DBIR is one of the cybersecurity industry’s most anticipated reports due to the fact that it’s based on the analysis of a significant number of real-world incidents. For the 2023 DBIR, Verizon analyzed more than 16,000 security incidents and roughly 5,200 breaches.

    The report shows — based on data from the FBI — that the median cost of ransomware incidents has more than doubled over the past two years, to $26,000. Losses were only reported in 7% of cases, with victims losing between $1 and $2.25 million.

    Ransomware accounted for 24% of cybersecurity incidents analyzed by Verizon. The company says there were more ransomware attacks in the past two years than in the previous five years combined.

    Reply
  26. Tomi Engdahl says:

    AntChain, Intel Create New Privacy-Preserving Computing Platform for AI Training
    https://www.securityweek.com/antchain-intel-create-new-privacy-preserving-computing-platform-for-ai-training/

    AntChain has teamed up with Intel for a Massive Data Privacy-Preserving Computing Platform (MAPPIC) for AI machine learning

    AntChain has teamed up with Intel to create a privacy-preserving computing platform designed for machine learning.

    The new AntChain Massive Data Privacy-Preserving Computing Platform (MAPPIC) leverages trusted execution environment (TEE) technology and provides large-scale AI training data protection capabilities.

    AntChain is a blockchain technology brand of Ant Group, a Chinese company owned by tech giant Alibaba.

    The MAPPIC SaaS platform relies on Intel’s Software Guard Extensions (SGX) security technology, the chip giant’s BigDL open source distributed deep learning libraries, as well as the Ant Group’s Occlum operating system and AntChain services for application security audits and distributed key management.

    In the future, AntChain intends to add other security technologies, such as Intel’s Trust Domain Extensions (TDX).

    “Over the past several years, Ant Group has been dedicated to the exploration and development of privacy-preserving computing technologies such as TEE, to enable secure and reliable industry collaboration in Web3, AI and other technology areas,” said Zhang Hui, CTO of the digital technology business at Ant Group.

    The Ant Group claims to have filed more than 1,000 patent applications for privacy-preserving computation technologies last year.

    Reply
  27. Tomi Engdahl says:

    Nokia: the number of bots has grown dramatically
    https://etn.fi/index.php/13-news/15056-nokia-bottien-maeaerae-kasvanut-hurjasti

    Nokia’s new Threat Intelligence report found that the volume of denial-of-service attacks originating from botnets has increased fivefold over the past year. The traffic comes from a large number of insecure IoT devices, and the attacks aim to disrupt telecom services for millions of users.

    The growth in bot attacks is a consequence of Russia’s invasion of Ukraine and of the growing number of profit-seeking hacker groups operated by cybercriminals. The trend has also been accelerated by consumers’ increasing use of IoT devices around the world.

    According to the report, the number of IoT devices participating in botnet-based DDoS attacks rose from about 200,000 a year ago to about one million. These million infected bots now generate over 40 percent of all DDoS traffic.

    The report’s findings are based on data collected by monitoring network traffic on more than 200 million devices running the Nokia NetGuard Endpoint Security product. The report is available here:

    https://www.nokia.com/networks/security-portfolio/threat-intelligence-report/

    Reply
  28. Tomi Engdahl says:

    Sextortionists are making AI nudes from your social media images https://www.bleepingcomputer.com/news/security/sextortionists-are-making-ai-nudes-from-your-social-media-images/

    The Federal Bureau of Investigation (FBI) is warning of a rising trend of malicious actors creating deepfake content to perform sextortion attacks.

    Sextortion is a form of online blackmail where malicious actors threaten their targets with publicly leaking explicit images and videos they stole (through hacking) or acquired (through coercion), typically demanding money payments for withholding the material.

    In many cases of sextortion, compromising content is not real, with the threat actors only pretending to have access to scare victims into paying an extortion demand.


    Reply
  29. Tomi Engdahl says:

    Government
    US, Israel Provide Guidance on Securing Remote Access Software
    https://www.securityweek.com/us-israel-provide-guidance-on-securing-remote-access-software/

    US and Israeli government agencies have published new guidance on preventing malicious exploitation of remote access software.

    US and Israeli government agencies have published a new guide to help organizations secure remote access software against malicious attacks.

    The new document provides an overview of remote access software, its malicious use, and detection methods, along with recommendations for organizations to prevent abuse.

    The Guide to Securing Remote Access Software (PDF) is authored by the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the National Security Agency (NSA), the Multi-State Information Sharing and Analysis Center (MS-ISAC), and the Israel National Cyber Directorate (INCD). Cybersecurity vendors and tech companies also contributed to the document.

    Remote access software, including remote administration and remote monitoring and management (RMM) solutions, allows organizations to remotely monitor networks and devices and helps them maintain and improve information technology (IT), industrial control system (ICS), and operational technology (OT) services.

    IT help desks, managed service providers (MSPs), network administrators, and software-as-a-service (SaaS) providers, use such software to gather data on networks and devices, automate maintenance, and perform endpoint configuration, recovery and backup, and patch management.

    GUIDE TO SECURING REMOTE ACCESS SOFTWARE
    https://www.cisa.gov/sites/default/files/2023-06/Guide%20to%20Securing%20Remote%20Access%20Software_508c.pdf

    Reply
  30. Tomi Engdahl says:

    Stay Focused on What’s Important
    https://www.securityweek.com/stay-focused-on-whats-important/

    Staying the course and sticking to strategic goals allows security professionals to steadily and continually improve the security posture of their organization.

    A few months ago, I found myself perusing a more than ample hotel breakfast buffet in search of a tasty breakfast. Fortunately, it was not difficult to assemble a plate that did not disappoint in the least. In fact, the food was so interesting to many of the hotel guests that one person became so fixated on the buffet that they plowed right into me

    Reply
  31. Tomi Engdahl says:

    OWASP’s 2023 API Security Top 10 Refines View of API Risks
    https://www.securityweek.com/owasps-2023-api-security-top-10-refines-view-of-api-risks/

    OWASP’s ranking for the major API security risks in 2023 has been published. The list includes many parallels with the 2019 list, some reorganizations/redefinitions, and some new concepts.

    Here are the OWASP Top 10 API Security Risks of 2023 and a comparison to the 2019 version:

    The top two remain almost identical: broken object level authorization (API1) and broken authentication (was broken user authentication; API2),
    API3 is now broken object property level authorization. It used to be excessive data exposure,
    API4 changes from lack of resources & rate limiting to unrestricted resource consumption,
    API5 remains the same: broken function level authorization,
    API6 changes from mass assignment to unrestricted access to sensitive business flows,
    API7 changes from security misconfiguration to server-side request forgery,
    API8 changes from injection to security misconfiguration (down one place from 2019),
    API9 changes from improper assets management to improper inventory management,
    API10 changes from insufficient logging & monitoring to unsafe consumption of APIs.

    Neither threats nor risks change drastically over a few years; but they evolve, and our understanding of them equally evolves. This is demonstrated by the 2023 listing – not so much a new list but a refining of the existing list.
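    The top risk on the list, broken object level authorization (API1), usually comes down to a single missing check: the API looks up an object by the ID the client supplied without verifying the caller is allowed to see it. A minimal Python sketch of the fix, with all names (Invoice, get_invoice, Forbidden) purely illustrative rather than from any real framework:

```python
# Illustrative sketch of the API1 (broken object level authorization) fix:
# every object lookup must verify that the authenticated caller is
# authorized for the specific object ID requested.
from dataclasses import dataclass


@dataclass
class Invoice:
    id: int
    owner_id: int
    amount: float


# Toy in-memory "database" keyed by invoice ID.
DB = {
    1: Invoice(1, owner_id=42, amount=99.0),
    2: Invoice(2, owner_id=7, amount=10.0),
}


class Forbidden(Exception):
    pass


def get_invoice(invoice_id: int, caller_id: int) -> Invoice:
    invoice = DB[invoice_id]
    # The object-level check API1 is about: without this line, any
    # authenticated user could read any invoice just by iterating IDs.
    if invoice.owner_id != caller_id:
        raise Forbidden(f"user {caller_id} may not read invoice {invoice_id}")
    return invoice
```

    The point of ranking this first is that authentication alone does not help: the caller in the vulnerable case is a perfectly valid logged-in user, just not the owner of the object they requested.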

    OWASP Top 10 API Security Risks – 2023
    https://owasp.org/API-Security/editions/2023/en/0x11-t10/

    Reply
  32. Tomi Engdahl says:

    ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages
    https://www.securityweek.com/chatgpt-hallucinations-can-be-exploited-to-distribute-malicious-code-packages/

    Researchers show how ChatGPT/AI hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers.

    It’s possible for threat actors to manipulate artificial intelligence chatbots such as ChatGPT to help them distribute malicious code packages to software developers, according to vulnerability and risk management company Vulcan Cyber.

    The issue is related to hallucinations, which occur when AI, specifically a large language model (LLM) such as ChatGPT, generates factually incorrect or nonsensical information that may look plausible.

    In Vulcan’s analysis, the company’s researchers noticed that ChatGPT — possibly due to its use of older data for training — recommended code libraries that currently do not exist.

    The researchers warned that threat actors could collect the names of such non-existent packages and create malicious versions that developers could download based on ChatGPT’s recommendations.

    Specifically, Vulcan researchers analyzed popular questions on the Stack Overflow coding platform and asked ChatGPT those questions in the context of Python and Node.js.

    ChatGPT was asked more than 400 questions and roughly 100 of its responses included references to at least one Python or Node.js package that does not actually exist. In total, ChatGPT’s responses mentioned more than 150 non-existent packages.
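    The practical defense Vulcan's finding suggests is to vet any AI-recommended package name before installing it. A hedged sketch of that idea in Python, using only the public PyPI JSON API (https://pypi.org/pypi/<name>/json); the verdict labels and the injectable fetch parameter are illustrative, not part of any standard tool:

```python
# Sketch: check whether a package name an AI assistant suggested actually
# exists on PyPI before running "pip install". A name that resolves to
# nothing is exactly the gap a squatter could later fill with a malicious
# upload, so treat it as hallucinated until proven otherwise.
import json
import urllib.error
import urllib.request


def pypi_metadata(name: str):
    """Return PyPI JSON metadata for a package, or None if it does not exist."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # no such package on PyPI
        raise


def vet_package(name: str, fetch=pypi_metadata) -> str:
    """Classify an AI-recommended package name before installing it.

    The fetch parameter is injectable so the check can be tested offline.
    """
    meta = fetch(name)
    if meta is None:
        return "hallucinated"
    return "exists"
```

    Existence alone is a weak signal once attackers start registering hallucinated names, so in practice this check would be combined with package age, download counts, and maintainer reputation.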

    Reply
  33. Tomi Engdahl says:

    Apple has rejected nearly a million mobile apps
    https://etn.fi/index.php/13-news/15058-apple-hylaennyt-laehes-miljoona-mobiilisovellusta

    AtlasVPN has collected data on Apple's App Store, and based on that data Apple rejected nearly one million apps from the App Store during 2020–2022 over privacy violations. The exact figure is 958,000 apps.

    The number of apps rejected for privacy reasons is growing rapidly. In 2020 the figure was 215,000, but last year it was already 400,000. User privacy is a major concern, as there have been cases where apps collect more data than necessary or share it with third parties without notice or without the user's consent.

    https://atlasvpn.com/blog/nearly-1-000-000-apps-rejected-from-apple-app-store-for-privacy-violations

    Reply
  34. Tomi Engdahl says:

    A single photo is enough: FBI issues a serious warning about a truly repulsive phenomenon
    https://www.is.fi/digitoday/art-2000009640379.html

    Malicious actors can easily fake images of you in sexual acts, and there is little you can do to prevent it.

    The US Federal Bureau of Investigation (FBI) warns that innocent photos and videos are being used for pornography and extortion. These artificial, AI-generated photos and videos are known as deepfakes.

    As a phenomenon, deepfakes are already years old, but as the technology has advanced, the forgeries have become more convincing and easier to produce. Deepfakes have also been used for political influence operations, among other things. Just recently, for example, a deepfake of President Vladimir Putin circulated in Russia.

    Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes
    https://www.ic3.gov/Media/Y2023/PSA230605

    The FBI is warning the public of malicious actors creating synthetic content (commonly referred to as “deepfakes”) by manipulating benign photographs or videos to target victims. Technology advancements are continuously improving the quality, customizability, and accessibility of artificial intelligence (AI)-enabled content creation. The FBI continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content. The photos or videos are then publicly circulated on social media or pornographic websites, for the purpose of harassing victims or sextortion schemes.
    Explicit Content Creation

    Malicious actors use content manipulation technologies and services to exploit photos and videos—typically captured from an individual’s social media account, the open internet, or requested from the victim—turning them into sexually themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites. Many victims, who have included minors, were unaware their images had been copied, manipulated, and circulated until it was brought to their attention by someone else or they discovered it themselves on the internet. Malicious actors may also send the photos directly to the victims for sextortion or harassment. Once the material is circulated, victims can face significant challenges in preventing its continual sharing or having it removed from the internet.

    Reply
  35. Tomi Engdahl says:

    Police ask citizens to listen for a “strange sound, for example one resembling sawing”
    https://www.is.fi/autot/art-2000009642349.html

    The right way to react to such observations is to call the emergency number, where staff can assess the situation and, if necessary, dispatch the police.

    Catalytic converters are cut straight from under cars, typically with battery-powered reciprocating saws and angle grinders. The phenomenon is driven by the value of the precious metals inside catalytic converters on the black-market scrap trade.

    Reply
  36. Tomi Engdahl says:

    ChatGPT creates mutating malware that evades detection by EDR
    https://www.csoonline.com/article/3698516/chatgpt-creates-mutating-malware-that-evades-detection-by-edr.html?s=04&fbclid=IwAR0QtwMq8Yj_1JiwJgEXN0XG7G2k2tTFZ_8KIADmNrsLVxEYgi6sr-RxLQM

    Mutating, or polymorphic, malware can be built using the ChatGPT API at runtime to effect advanced attacks that can evade endpoint detection and response (EDR) applications.

    Reply
  37. Tomi Engdahl says:

    Is the market oversaturated?
    https://www.facebook.com/groups/shahidzafar/permalink/6576310145721390/

    Even if it is, work hard and keep learning and surpass others.

    Sort of. It’s full of people at entry level with just credentials (degree and/or certs) and no experience. And it’s full of jobs that want everyone to have at least 4 years of experience for “entry level” positions. This has created a massive barrier to entry, and it’s only being made worse by both the companies and the academic institutions.

    No, but there are a lot of barriers to entry.

    Recruiters/managers etc. claim it’s not. However, I guess a lot of people are lacking the skills/experience, so it’s hard to find good work.

    It’s saturated with people who have certs but can’t do the work.

    I get applications for people applying to be a pentester and want 130k with not even hobbyist experience.

    I get people with pentest+, CEH, and other certs but can’t pass the interview.

    No one wants to work on the practical skills; they just want to read a book and pass a test.

    I’ve said it in many groups: I’ll hire with no certs or degree if you have a competitive HTB account, blog, GitHub, or homelab where you can demonstrate your ability.

    Depends where, depends what field, depends on seniority.
    Smart person with broad IT knowledge and passion can be hired in cyber but also in many other jobs so learn – don’t think that some cert will give you a job

    Reply
  38. Tomi Engdahl says:

    ChatGPT creates mutating malware that evades detection by EDR
    https://www.csoonline.com/article/3698516/chatgpt-creates-mutating-malware-that-evades-detection-by-edr.html

    Mutating, or polymorphic, malware can be built using the ChatGPT API at runtime to effect advanced attacks that can evade endpoint detection and response (EDR) applications.

    Reply
  39. Tomi Engdahl says:

    To Improve Privacy, Apple to Strip Tracking Parameters From Shared URLs
    Apple will remove tracking data from links shared in Messages and Mail, which could annoy marketers who rely on ‘tracking parameters’ to see how their ad campaigns are performing.
    https://uk.pcmag.com/security/147264/to-improve-privacy-apple-to-strip-tracking-parameters-from-shared-urls
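    The mechanism behind this kind of feature is simple: remove known tracking query parameters from a URL before it is shared. A small Python sketch of the idea; the parameter list below is a common illustrative subset (UTM tags, ad click IDs), not Apple's actual list, which has not been published in full:

```python
# Sketch: strip well-known tracking query parameters from a URL before
# sharing it, leaving functional parameters (IDs, search terms) intact.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Illustrative subset of tracking parameters, not an authoritative list.
TRACKING_PARAMS = {
    "utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content",
    "gclid",   # Google Ads click ID
    "fbclid",  # Facebook click ID
    "mc_eid",  # Mailchimp email ID
}


def strip_tracking(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlsplit(url)
    kept = [
        (key, value)
        for key, value in parse_qsl(parts.query, keep_blank_values=True)
        if key.lower() not in TRACKING_PARAMS
    ]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

    The hard part in practice is the same one marketers complain about: there is no registry of tracking parameters, so any denylist like this is inherently incomplete and needs ongoing curation.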

    Reply
  40. Tomi Engdahl says:

    How Attorneys Are Harming Cybersecurity Incident Response
    https://www.schneier.com/blog/archives/2023/06/how-attorneys-are-harming-cybersecurity-incident-response.html

    New paper: “Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys“:

    Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop.

    So, we’re not able to learn from these breaches because the attorneys are limiting what information becomes public. This is where we think about shielding companies from liability in exchange for making breach data public. It’s the sort of thing we do for airplane disasters.

    Reply
  41. Tomi Engdahl says:

    Feds’ case against Huawei in cell networks tracked ‘unprofitable’ deals near US military bases / Federal agents expressed concerns that Huawei could intercept military communications
    https://www.theverge.com/2022/7/24/23276222/huawei-telecom-cell-networks-unprofitable-deals-us-military-bases

    An FBI investigation into Huawei reveals the Chinese telecom company had a pattern of installing equipment on cell towers near military bases in rural America — even if it wasn’t profitable to do so, according to a report from CNN. The unearthed investigation sheds some light on the US government’s motive behind the stalled “rip and replace” program that pushes for the removal of Huawei’s tech throughout the country.

    Reply
