Cybersecurity – Launched Tech News (tbtech.co)

Experts gather at UK Cyber Week to address neurodiversity
Mon, 18 Mar 2024

Over 2500 industry leaders to attend Olympia event on 17-18 April to tackle diversity and wellness as the skills crisis in cyber worsens

• 15-20% of the UK population are neurodivergent

• More than half of those who self-report as neurodivergent have not disclosed their condition, with 64% of employers still having ‘little’ or ‘no’ understanding of neurodiverse conditions

• 59% of cybersecurity teams are understaffed as the cyber skills gap grew by 13% from 2022 to 2023 

• Key speakers at UK Cyber Week include: Marcus Hutchins, the Security Researcher who stopped WannaCry; Danni Brooke, Intelligence Expert and star of Channel 4’s Hunted; Stephanie Itimi, CEO of SEIDEA; and Rik Ferguson, VP Security Intelligence at Forescout Technologies and Special Advisor for Europol.

18th March 2024, United Kingdom: Global cybersecurity and IT leaders are gathering at the free-to-attend UK Cyber Week Expo & Conference at Olympia, London, on April 17-18 to debate some of the most pressing and prevalent issues currently impacting the UK cyber sector.

Now in its second year, UK Cyber Week has placed neurodiversity and wellness at the top of the agenda, with its ‘Inclusive Cyber Space’ area aiming to increase conversation and drive action to close diversity and equality gaps.

The sector continues to face a damaging skills crisis, which experts believe could be tackled by creating a more diverse workforce. A study published in October 2023 revealed that the skills gap grew by 13% from 2022, equating to roughly four million cybersecurity professionals missing from the global workforce. Another report published by ISACA recently found that 59% of cybersecurity teams are understaffed.

Throughout the two-day UK Cyber Week event, organised by ROAR B2B, attendees will hear from some of the most well-renowned names in the industry including Ben Owen, Intelligence Expert and star of Channel 4’s Hunted and The Boss; Geoff White, Tech and Crime Author and Podcaster; Danni Brooke, Intelligence Expert and star of Channel 4’s Hunted; Christine Bejerasco, CISO at WithSecure; Stephanie Itimi, CEO of SEIDEA; Rik Ferguson, VP Security Intelligence at Forescout Technologies and Special Advisor for Europol; and Marcus Hutchins, the Security Researcher who stopped one of the world’s largest ever cyber-attacks, WannaCry.

Keynote speaker Charlie Rossi, Winner of ‘Undergraduate of the Year: Celebrating Neurodiverse Excellence 2023’, will give industry leaders insight into the barriers facing some from entering the cyber workforce, with her presentation, ‘Demographic Data: Barriers of my Success.’ 

It’s estimated that around 15-20% of the UK population is neurodivergent. Qualities shared by many neurodivergent people, such as hyperfocus, high levels of concentration and persistence, and attention to patterns and repetition, are advantageous in tasks related to cyber protection.

Charlie Rossi said: “As a student with both ASD (Autistic Spectrum Disorder) and ADHD, there have definitely been barriers to inclusion. After being named the Undergraduate of the Year last year, I was able to spend 10 weeks working at Rolls-Royce as part of my internship, which inspired me to pursue a career in cyber. I want to share at UK Cyber Week why there’s no one way to be neurodivergent. I’ve been able to challenge stereotypes about autism, and to shift the narrative, focusing my application on the positive aspects of my neurodivergence – the unique skill set and perspective it gives me to work within cybersecurity. I look forward to sharing my experiences and hearing those of others attending UK Cyber Week.”

Other panel discussions and keynotes in The Inclusive Cyber Space include:

• Daniel Olsen: CyberPsychology: The Battle of Your Attention

• Steph Aldridge: NeuroCyber, the UK’s cyber neurodiversity network – Panel discussion around NeuroCyber NeuroUnity Enablers with Dr Ashley Sweetman, Holly Foxcroft, Ed Tucker, Jade Eskenzi

• Louise Batty: Stott & May – Menopause in Cyber

Purvi Kay, Head of Cyber Security Governance, Risk and Compliance, BAE Systems, will also be running a session on the importance of tackling the diversity of cyber threats with an equally diverse cyber team.

Purvi added: “The cybersecurity industry is constantly evolving with new threats coming to the fore all the time. To tackle these diverse threats, we need diverse teams – and not just diversity of background or heritage, but diversity of thought. This is what a diverse workforce can offer the industry as it faces an ever-more dangerous and complex threat landscape.”

“Highlighting the need for broader diversity in the industry and the benefits neurodiversity can offer is critical. In a recent study, 92% of organisations reported having skills gaps, and promoting diversity, equality and inclusion initiatives is a valuable way to address this growing concern,” continued Purvi.

Bradley Maule-ffinch, Group Managing Director of ROAR B2B, added: “This year’s UK Cyber Week looks to tackle some of the biggest challenges facing the cyber industry today. We want to encourage conversations between the sector’s leading voices to help address issues such as the lack of diversity in the workforce, and how the integration of neurodivergent professionals could help stem the growing cyber skills gap. 

“We’re delighted to be putting on an even better event than last year, with more visitors, more industry leaders, and more of a community atmosphere at a new bigger venue – Olympia London. We can’t wait for the doors to open and to welcome everyone in April!”

In partnership with ClubCISCO, ROAR B2B will also be launching a series of Quick-Fire Surveys. Consisting of 15 key questions, these surveys are designed to gain insights on the most topical issues driving the cybersecurity news agenda. 

UK Cyber Week is backed by government partners, including the National Cyber Crime Unit, the Department for Science, Innovation and Technology (DSIT), the National Crime Agency and UK Defence & Security Exports (UKDSE). This year’s key exhibitors include Censornet, Culture AI, DigitalXRAID, Forescout Technologies, Sonicwall, and Threatlocker.

To register for your free ticket to attend UK Cyber Week this year, or to book a stand, visit the UK Cyber Week website.

Importance of a Zero Trust Approach to GenAI
Thu, 07 Mar 2024

There is no doubt that generative AI continues to evolve rapidly in its ability to create increasingly sophisticated synthetic content. This has made the need to ensure trust and integrity vital. It is time for businesses, governments, and the industry to take a zero trust security approach, combining cybersecurity principles, authentication safeguards, and content policies to create responsible and secure generative AI systems. But what would Zero Trust Generative AI look like? Why is it required? How should it be implemented? And what are the main challenges the industry will face?

Never assume trust

With a Zero Trust model, trust is never assumed. Rather, it operates on the principle that rigorous verification is required to confirm each and every access attempt and transaction. Such a shift away from implicit trust is crucial in the remote, cloud-based computing era in which we all now live.

Today, generative AI is all around us and can be used to autonomously create new, original content like text, images, audio, and video based on its training data. Plus, this ability to synthesise novel, realistic artefacts has grown enormously with the algorithmic advances we have seen over the last 12 months.

A Zero Trust model would prepare generative AI models for emerging threats and vulnerabilities by weaving proactive security measures throughout their processes, from data pipelines to user interaction. This would provide multifaceted protection against misuse at a time when generative models are acquiring unprecedented creative capacity in the world today.

Ensuring vital safeguards

As generative AI models continue to increase in their sophistication and realism, so too does their potential for harm if misused or poorly designed. Vulnerabilities could enable bad actors to exploit them to spread misinformation, forge content designed to mislead, or produce dangerous material on a global scale.

Unfortunately, even those systems that are well-intentioned may struggle to fully avoid ingesting biases or falsehoods during data collection if we are not careful. Moreover, the authenticity and provenance of their strikingly realistic outputs can be challenging to verify without rigorous mechanisms.

A Zero Trust approach would provide vital safeguards by thoroughly validating system inputs, monitoring ongoing processes, inspecting outputs, and credentialing access through every stage to mitigate risks. This would, in turn, protect public trust and confidence in AI’s societal influence.

A framework for a Zero Trust approach

Constructing a Zero Trust framework for generative AI encompasses several practical actions across architectural design, data management, access controls and more. To ensure optimal security, key measures involve:

1. Authentication and authorisation: Verify all user identities unequivocally and restrict access permissions to only those required for each user’s authorised roles. Apply protocols like multi-factor authentication (MFA) universally.

2. Data source validation: Confirm integrity of all training data through detailed logging, auditing trails, verification frameworks, and oversight procedures. Continuously evaluate datasets for emerging issues.

3. Process monitoring: Actively monitor system processes using rules-based anomaly detection, machine learning models and other quality assurance tools for suspicious activity.

4. Output screening: Automatically inspect and flag outputs that violate defined ethics, compliance, or policy guardrails, facilitating human-in-the-loop review.

5. Activity audit: Rigorously log and audit all system activity end-to-end to maintain accountability. Support detailed tracing of generated content origins.
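Taken together, the five measures above can be sketched as a single request-handling pipeline. The following Python sketch is illustrative only: every function name, pattern, and policy here is an assumption for demonstration, not part of any particular product or framework.

```python
import hashlib
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")

# Illustrative output guardrails (measure 4: output screening)
BLOCKED_PATTERNS = [r"(?i)credit card number", r"(?i)passport number"]

def is_authorised(user):
    # Measure 1: require MFA plus an explicit role before any access
    return user.get("mfa_verified", False) and "genai:generate" in user.get("roles", [])

def screen_output(text):
    # Flag outputs that violate the defined policy guardrails
    return [p for p in BLOCKED_PATTERNS if re.search(p, text)]

def handle_request(user, prompt, model):
    # Measure 5: log every attempt end-to-end, allowed or denied
    stamp = datetime.now(timezone.utc).isoformat()
    if not is_authorised(user):
        audit_log.info("%s DENIED user=%s", stamp, user.get("id"))
        return None
    output = model(prompt)
    violations = screen_output(output)
    # Fingerprint the output so generated content can be traced later
    digest = hashlib.sha256(output.encode()).hexdigest()[:16]
    audit_log.info("%s user=%s sha=%s violations=%s", stamp, user["id"], digest, violations)
    if violations:
        # Human-in-the-loop review rather than silent release
        return "[withheld pending human review]"
    return output
```

Here `model` is any callable that turns a prompt into text; data source validation and process monitoring (measures 2 and 3) would sit around the training pipeline rather than this request path.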

Securing the content layer holistically

While access controls provide the first line of defence in Zero Trust Generative AI, comprehensive content layer policies constitute the next crucial layer of protection and must not be overlooked. This extends beyond what users can access to what data the AI system itself can access, process, or disseminate, irrespective of credentials.

Key aspects of content layer security include defining content policies that restrict access to prohibited types of training data, sensitive personal information, or topics posing heightened risks. It also involves implementing strict access controls specifying which data categories each AI model component can access, then performing ongoing content compliance checks, using automated tools plus human-in-the-loop auditing, to catch policy and regulatory violations. Finally, content layer security can be used to maintain clear audit trails for high-fidelity tracing of the origins, transformations, and uses of data flowing through generative AI architectures. This holistic content layer oversight further cements comprehensive protection and accountability throughout generative AI systems.
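One way to picture this is a per-component allow-list paired with an append-only audit trail. A minimal Python sketch, in which the component names and data categories are invented for illustration:

```python
from datetime import datetime, timezone

# Invented allow-list: which data categories each pipeline component may touch
COMPONENT_POLICIES = {
    "training_ingest": {"public", "licensed"},
    "inference_engine": {"public"},
}

audit_trail = []  # append-only record for tracing data flows end-to-end

def check_access(component, category):
    """Return whether `component` may use data of `category`, logging either way."""
    allowed = category in COMPONENT_POLICIES.get(component, set())
    audit_trail.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "component": component,
        "category": category,
        "allowed": allowed,
    })
    return allowed
```

Every decision, allowed or denied, lands in `audit_trail`, giving the tracing described above; a real system would persist this to tamper-evident storage rather than an in-memory list.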

Challenges to overcome

While crucial for responsible AI development and building public trust, putting Zero Trust Generative AI into practice does, unfortunately, face a number of challenges. On the technical side, rigorously implementing layered security controls across sprawling machine learning pipelines without degrading model performance will undoubtedly be non-trivial for engineers and researchers. Additionally, balancing powerful content security, authentication and monitoring measures while retaining the flexibility for ongoing innovation will represent a delicate trade-off that will require care and deliberation when crafting policies or risk models. After all, overly stringent approaches would only constrain the benefit of the technology.

Further challenges will relate to ensuring content policies are at the right level and unbiased. 

Safeguarding the future

In an era where machine-generated media holds increasing influence over how we communicate, live, and learn, ensuring accountability will be paramount. Holistically integrating Zero Trust security spanning authentication, authorisation, data validation, process oversight and output controls will be vital to ensure such systems are safeguarded as much as possible against misuse. 

Yet, to safeguard the future will require sustained effort and collaboration across technology pioneers, lawmakers, and society. By utilising a Private Content Network, organisations can do their bit by effectively managing their sensitive content communications, privacy, and compliance risks. A Private Content Network can provide content-defined zero trust controls, featuring least-privilege access defined at the content layer and next-gen DRM capabilities that block downloads from AI ingestion. This will help ensure that Generative AI can flourish in step with human values.

Managing Private Content Exposure Risk in 2024
Wed, 31 Jan 2024

Managing your data privacy and compliance risks becomes more difficult each year. Cybercriminals continue to evolve their strategies and approaches, making it more difficult to identify, stop, and mitigate the damage of malicious attacks. Recognising that they can breach hundreds, or even thousands, of companies and millions of records with one successful attack, many rogue nation-states and cybercriminals have turned to the supply chain, a trend we believe will increase in 2024. Third-party vendors, including technology providers, represented 15% of all successful data breaches last year. And as generative artificial intelligence (GenAI) large language models (LLMs) take the digital landscape by storm, tracking and controlling our sensitive content has become even harder.

In response, regulatory bodies are evolving their existing data privacy regulations and adding new ones. They have also ratcheted up fines and penalties targeting regulatory violations. This “reactionary movement” will not slow down but continue to pick up pace in the coming year. All of this means organisations must track and control content access and generate more audit log reports to demonstrate adherence to the relevant compliance requirements.

Big isn’t always better

The number of employees and third parties using generative artificial intelligence (GenAI) large language models (LLMs) will increase in 2024 as the competitive advantages become too significant to ignore. This will expand the threat surface and the potential for sensitive content to be inadvertently or intentionally exposed.

Even with advances in security controls, high-profile data breaches stemming from GenAI LLM misuse are likely. This will force data security to be a central part of GenAI LLM strategies. Organisations slow to adapt will face brand reputation damage, lost revenue opportunities, potential regulatory fines and penalties, and ongoing litigation costs. 

A need for MFT to grow up

Managed file transfer (MFT) tools are used for the digital transfer of data in an automated, reliable, and secure manner, with governance tracking and controls for regulatory compliance. However, many are based on decades-old technology. As a result, we have witnessed a spiralling escalation of cyberattacks on them by rogue nation-states and cybercriminals.

Two major MFT tools experienced zero-day exploits in 2023. In both instances, multiple zero-day vulnerabilities were targeted. If the two MFT attacks in 2023 are any indication, cybercriminals will continue to exploit zero-day vulnerabilities in legacy MFT solutions in 2024. 

Email continues to be targeted

In the past year, malware attacks instigated through email shot up 29%, phishing attacks grew 29% and business email compromise (BEC) increased by 66%. Because of this, more than eight in ten data breaches now target humans as their first line of access using social engineering strategies.  

Unfortunately, legacy email systems lack the requisite security capabilities. Until organisations embrace an email protection gateway where email is sent, received, and stored using zero-trust policy management with single-tenant hosting, email security will remain a serious risk factor. 

Shifting standards

Regulatory bodies will continue evolving data privacy regulations in 2024. They will also likely ratchet up fines. Recent major fines, like those against Marriott and British Airways, were in large part due to lapses in data security. This precedent indicates regulators will come down hard on any organisation that negligently exposes personal data. This means businesses will, more than ever, need to track and control content access and generate audit log reports to demonstrate compliance.

Gartner predicts that personal data for three-quarters of the world’s population will be covered by data privacy regulations by the end of 2024, and the average annual budget for privacy in a company will exceed $2.5 million. 

The need for more stringent data sovereignty

Data sovereignty will be an increasing challenge for organisations in 2024. New privacy laws often require organisations to control the country where data resides. This can be a significant challenge for multinational businesses. Yet at the same time, data democratisation – the practice of making data accessible and consumable for everyone in an enterprise regardless of technical skill – is a trend that will impact data sovereignty. 

The good news is that data sovereignty inherently empowers organisations to maintain compliance with local and international data regulations. This minimises legal risks, establishes a reputation for responsible data handling, and helps companies avoid hefty fines. By prioritising data sovereignty, organisations will be able to build trust with customers and stakeholders alike.

Move towards DRM to protect sensitive content

As files grow ever larger, robust solutions for their secure handling and storage become ever more important.

Digital rights management (DRM) adoption will clearly accelerate in 2024 as organisations aim to protect sensitive content and comply with expanding regulations. Data classification and DRM policy management will drive organisations to institute tiered data protection, ranging from least-privilege access and watermarks for low-risk data, through view-only DRM for moderate-risk data, to safe video-streamed editing that blocks downloads and copy-and-paste for high-risk data. Highly regulated industry sectors such as healthcare and finance will be the biggest adopters.
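That tiered model amounts to a lookup from data classification to protection controls. A hypothetical Python sketch (the tier names and policy fields are assumptions, not any vendor's schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrmPolicy:
    watermark: bool
    view_only: bool
    block_download: bool
    block_copy_paste: bool

# Illustrative classification-to-controls mapping, mirroring the tiers above
POLICY_BY_RISK = {
    "low": DrmPolicy(watermark=True, view_only=False, block_download=False, block_copy_paste=False),
    "moderate": DrmPolicy(watermark=True, view_only=True, block_download=False, block_copy_paste=False),
    "high": DrmPolicy(watermark=True, view_only=True, block_download=True, block_copy_paste=True),
}

def policy_for(classification):
    # Fail closed: anything unclassified gets the strictest tier
    return POLICY_BY_RISK.get(classification, POLICY_BY_RISK["high"])
```

Failing closed on unknown classifications reflects the least-privilege principle: content is only relaxed to a weaker tier once it has been explicitly classified.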

Businesses need to hit the reset button

In 2024, businesses will be under heightened strain to protect confidential data amidst escalating cyber threats and to ensure adherence to burgeoning international regulatory standards. It is time for organisations to look at alternatives. 

The landscape of sensitive content communication has changed and will continue to do so over the next 12 months. Yet, by adopting zero-trust architectures, detailed content-based security models, strong access management, integrated DRM, DLP, and other leading-edge security measures, organisations large and small can mitigate risks and uphold compliance. It is time for organisations to hit reset on their sensitive content communication strategies and to ensure they have the right technologies in place to protect all their file and email data communications.

Featurespace: Fraud in Financial Institutions increased by 70%
Mon, 11 Dec 2023

Featurespace, the world leader in enterprise-grade fraud and financial crime technology, has revealed the findings of a new report: The State of Fraud and Financial Crime in North America Annual Report 2023.

The research, developed in conjunction with GlobalData, shows that overall North American fraud rates in 2023 – which include both successful and unsuccessful incidents of fraud and financial crime – increased by 70%, compared to 59% in 2022, as reported by respondents.

Commenting on the report, Martina King, CEO, Featurespace, says: “These findings emphasize the need for continued vigilance. The fact that fraud is considered commonplace points to the real challenges in our sector. We need to build a future together where the fraudsters are two steps behind the financial institutions – instead of two steps ahead.”

Focusing on payment methods

Financial Institutions were asked in both 2022 and 2023 whether they had observed an increase in fraud rates across a range of twenty different payment methods. This included traditional methods, such as credit cards, and newer ones, such as digital wallet payments.

The latest results show several striking changes in the payment methods attracting the greatest growth in fraud-related activity. Credit cards continue to top the list, with 64% of respondents reporting growth in fraud rates in 2022, increasing to 85% in 2023, an uplift of 22 percentage points.

Check payment fraud, which saw a 38% increase in 2022, has now skyrocketed to a remarkable 70% in North America in 2023. This represents a substantial 32 percentage point increase, positioning it as the second-fastest-growing method year-on-year, behind only credit cards. Indeed, the share of fraudulent transactions tied to physical forgery or counterfeit activity has doubled, now comprising 14%. This is likely an outcome of fraud associated with check payments, as checks continue to be widely used by genuine customers and fraudsters alike.

In contrast, the figure for digital wallets has fallen from 58% of FIs reporting an increase in fraud in 2022 to 22% in the most recent survey. Notably, the percentage of FIs reporting increased fraud rates for Apple Pay, PayPal and Google Pay has also decreased quite steeply – although in each case growth remains in double digits.

Finally, two of the largest changes between the 2022 and 2023 surveys relate to PayPal owned digital wallet Venmo, and BNPL payments. Both dropped from over 30% in 2022 to 5% or less in 2023.

Increasing “success” but also better blocking tactics

Featurespace’s findings show that the percentage of FIs reporting an increase in successfully executed fraudulent transactions grew from 62% in 2022 to 71% in 2023. However, those reporting growth in “false positive rates” (non-fraudulent transactions that are blocked by the organization) increased by 20 percentage points over the same period, reaching 63% in 2023. This finding implies that, in response to the mounting challenge of fraud, FIs are resorting to imposing stricter controls on all their customers.

Indeed, FIs are employing a range of strategies to help combat the increasing levels of fraud faced by their organizations. Survey respondents were presented with six options that can be adopted in fraud prevention, detection, and mitigation activities, with the majority (71%) saying that they used two or three different measures, while 28% used three or more. Just one percent indicated they used only one. The most commonly selected measure was “rules-based algorithm” (72%), closely followed by “fraud prevention application programming interfaces (APIs)” at 71%.
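As a rough illustration of the “rules-based algorithm” approach the survey mentions, such a screen might apply a handful of independent rules and block a transaction only when several fire at once. The rules and thresholds below are invented purely for the sake of example:

```python
# Each rule inspects one attribute of a transaction dict and may flag it.
RULES = [
    ("high_amount", lambda tx: tx.get("amount", 0) > 5000),
    ("brand_new_payee", lambda tx: tx.get("payee_age_days", 9999) < 1),
    ("foreign_ip", lambda tx: tx.get("ip_country") != tx.get("home_country")),
]

def screen_transaction(tx):
    """Return the rules that fired and whether the transaction should be blocked."""
    fired = [name for name, rule in RULES if rule(tx)]
    return fired, len(fired) >= 2  # block only when two or more rules agree
```

Requiring agreement between rules before blocking is one simple way to hold down the false positive rates the report highlights.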

Financial institutions with a more comprehensive range of measures were also more inclined to report a reduction in fraud losses over the past year: FIs that reported lower losses from fraud had three or more measures in place, while those reporting increased fraud losses had fewer than three.

Emerging technologies and new solutions are being embraced

FIs are increasingly receptive to the urgent adoption of new solutions in 2023, with the research finding that the proportion of respondents requiring a hard “proof of concept” before adopting new solutions has dropped from 38% in 2022 to just 8% in 2023.

Recognizing the need to embrace emerging technologies, 98% of respondents acknowledge the need for Generative AI as a solution for combating fraud and financial crime. While a small proportion of respondents claimed to be actively using Generative AI at present, nearly all expressed a willingness to embrace these technologies. Only 2% of respondents saw no need to engage with Generative AI and emerging technologies.

Commenting on the use of new technology to battle fraud, David Sutton, Featurespace’s Chief Innovation Officer, adds: “Smarter technology helps financial institutions better understand their consumers. We have pushed this to the next level with the recent announcement of TallierLTM™, the world’s first Large Transaction Model. This pairs cutting-edge generative AI algorithms with huge volumes of transaction data, enabling a machine to efficiently comprehend the meaning and relationships between different customer transactions.”

Examining fraud types

In addition to looking at the impact of fraud on different payment mechanisms, FIs were also asked to provide an overview of the volume and value of fraudulent transactions as defined by the Federal Reserve’s FraudClassifier model. 

When comparing the change in distribution of fraudulent incidents between 2022 and 2023, the overall proportion of “unauthorized” incidents dropped slightly, from 49% in 2022 to 46% in 2023, meaning “authorized” incidents increased over the same time.

Examining these further, within the “unauthorized” category, “Forgery and counterfeit” activity doubled its share (to 14%), while “Digital payment” dropped by five-percentage points: the two “account takeover” fraud types each dropped by two-percentage points.

For the “authorized party” category, the two types to register the greatest changes were “false claim”, which increased by five-percentage points and “relationship or trust fraud” which decreased by seven-percentage points.

Commenting on Featurespace’s latest fraud research, Duncan Sandys, CEO of P20, the voice of the global payments industry, adds: “The rate at which fraudsters are accelerating can feel daunting, but that is why we need to be equally focused on acceleration: whether through information sharing, technological innovations, or fresh solutions within our existing systems.”

Unlocking productivity and efficiency gains with data management
Tue, 04 Jul 2023

Enterprise data has been closely linked with hardware for many years, but an exciting transformation is underway. Data stewards in larger corporations have long been obliged to concentrate on acquiring, overseeing, and maintaining hardware-based data storage infrastructure. Additionally, they were periodically required to purchase the newest equipment from vendors and to migrate their data to the most up-to-date gear to reap the benefits of the latest developments in efficiency and security.

Now, the era of the hardware businesses is gone, as modern data storage and protection capabilities, powered by the cloud, have rendered much of the once-crucial storage legacy technology obsolete. With advanced data services available through the cloud, organisations can forego investing in hardware and abandon infrastructure management in favour of data management. This change is widely recognised, with Gartner Research VP Julia Palmer, an expert on emerging infrastructure technologies and strategies, highlighting the shift in an October 2022 report.

Once your data is no longer tied to a specific facility, location, or hardware, new opportunities arise for leveraging it within your organisation. However, to do so, you must first shift your strategic perspectives on data management and delivery, focusing on three key requirements: utilising the cloud for more flexibility and scalability, making data delivery a priority, and securing data. Let’s explore these in more detail:

1. The time is now to transfer data to the cloud

The advantages of shifting your data to the cloud have been apparent for quite some time, as the economic benefits and near-infinite scalability of object storage have solidified cloud services as the infrastructure of the future. The majority of data storage is now done in the cloud – over 50% of company data is held there – and the pandemic has only increased the urgency to adopt cloud services.

Utilising cloud services is no longer simply about cutting long-term expenses, minimising physical infrastructure, and enhancing demand scalability; it also enables more agility for your business and transforms the possibilities of data usage.

2. Prioritising data delivery is key to productivity and efficiency

The shift towards modern infrastructure has been ongoing for a while, but the emergence of remote and hybrid work has accelerated this change. Previously, users were stationed at desks near the hardware that stored and protected their data, but now they are spread out everywhere, working from home offices, cafes, client offices, co-working spaces, and more. Users don’t stay put, either, shifting from location to location, and they expect to be able to quickly and easily access their data regardless of where they happen to be working.

This transformation in how we work means that applications must run close to workers’ data. Regardless of industry, where a worker is located, or whether they are using an off-the-shelf or homegrown application, apps must be close to data to deliver the expected level of performance and ensure efficiency and productivity. Traditional storage hardware and wide area networks are insufficient for this task because the software needs to reach across the wire to access that data. This is where the cloud has become a crucial delivery vehicle for data. Cloud computing allows for increased flexibility and the ability to deliver data to users and applications anywhere in the world.

3. Never compromise on data protection

Last but not least, data delivery must never come at the expense of data protection. Even before the pandemic-driven shift to hybrid and remote working, ransomware was a growing threat. UK government estimates released in April 2023 suggested around 2.39 million instances of cyber crime across all businesses, with 11% of organisations experiencing cyber crime in the previous 12 months. The expanded attack surface now gives malicious hackers even more opportunities: with more people accessing data and systems from more locations, it is imperative to protect data while supporting the flexibility of hybrid and remote work models.

Ignoring one and focusing on the other is not an option. Keeping employees in a few major locations to simplify data protection will restrict productivity and shrink your talent pool; conversely, distributing data everywhere without a reliable ransomware recovery plan exposes your business to extended downtime and financial loss. A comprehensive approach to data protection is therefore critical to ensuring both business efficiency and security globally.

Reaping the benefits from a shift to data management

Even with the underlying risk of ransomware, this transition from managing infrastructure to managing data aligns perfectly with the new flexible way of working. Users can be in the office one day and at home the next, collaborating with colleagues, partners and others potentially all over the world. Data centres no longer need to be the centre of data; the data itself now is.

A new approach to enterprise data is now a requirement for businesses: shifting to the cloud, prioritising data delivery, and homing in on data protection are key to successfully transitioning from managing infrastructure to managing data. Embracing this methodology could also spark larger changes with exciting implications for enterprises as they decide what to do with this newly accessible data, for example feeding it into new machine learning and artificial intelligence workloads to further drive innovation, workplace productivity and efficiency.

]]>
https://tbtech.co/news/unlocking-productivity-and-efficiency-gains-with-data-management/feed/ 0
Unlocking productivity and efficiency gains with data management https://tbtech.co/news/unlocking-productivity-and-efficiency-gains-with-data-management-2/?utm_source=rss&utm_medium=rss&utm_campaign=unlocking-productivity-and-efficiency-gains-with-data-management-2 https://tbtech.co/news/unlocking-productivity-and-efficiency-gains-with-data-management-2/#respond Tue, 20 Jun 2023 09:06:00 +0000 http://52.56.93.237/news/unlocking-productivity-and-efficiency-gains-with-data-management-2/ Enterprise data has been closely tied to hardware for many years, but an exciting transformation is underway. Data stewards in larger corporations have long been obliged to concentrate on acquiring, managing, and maintaining storage hardware, periodically purchasing the newest equipment from vendors and migrating their data to the latest gear to reap the benefits of new developments in efficiency and security.

Now, the era of hardware-centric storage is over, as modern data storage and protection capabilities, powered by the cloud, have rendered much of the once-crucial legacy storage technology obsolete. With advanced data services available through the cloud, organisations can forego investing in hardware and abandon infrastructure management in favour of data management. This shift is widely recognised; Gartner Research VP Julia Palmer, an expert on emerging infrastructure technologies and strategies, highlighted it in an October 2022 report.

]]>
https://tbtech.co/news/unlocking-productivity-and-efficiency-gains-with-data-management-2/feed/ 0
Critical capabilities of the modern SIEM https://tbtech.co/news/critical-capabilities-of-the-modern-siem/?utm_source=rss&utm_medium=rss&utm_campaign=critical-capabilities-of-the-modern-siem https://tbtech.co/news/critical-capabilities-of-the-modern-siem/#respond Thu, 15 Jun 2023 09:06:00 +0000 http://52.56.93.237news/critical-capabilities-of-the-modern-siem/ Modern SIEMs provide extensive machine learning and anomaly detection capabilities for advanced threat detection. This can help your security team increase its effectiveness and reduce the resources required to run security operations, which matters at a time when security skills are in short supply and the number of alerts keeps rising.

In a nutshell, a SIEM allows IT teams to see the bigger picture by collecting security event data from enterprise applications, the cloud and core infrastructure to learn exactly what goes on within the enterprise, creating value from the sum of the data that is worth much more than the individual pieces. A single alert from an antivirus filter may not be cause for panic on its own, but if it correlates with other anomalies, e.g. from the firewall at the same time, this could signify that a severe breach is in progress.
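As an illustration of this correlation idea, the sketch below flags a host only when alerts from two different sources land within a short window. The event schema, field names and five-minute window are invented for the example, not taken from any particular SIEM product.

```python
from datetime import datetime, timedelta

# Hypothetical normalised events; the fields are illustrative assumptions.
events = [
    {"source": "antivirus", "host": "ws-042", "time": datetime(2023, 6, 1, 9, 14)},
    {"source": "firewall",  "host": "ws-042", "time": datetime(2023, 6, 1, 9, 15)},
    {"source": "antivirus", "host": "ws-107", "time": datetime(2023, 6, 1, 11, 2)},
]

def correlate(events, window=timedelta(minutes=5)):
    """Flag hosts where alerts from *different* sources land within one window."""
    alerts = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if (a["host"] == b["host"] and a["source"] != b["source"]
                    and abs(a["time"] - b["time"]) <= window):
                alerts.append(a["host"])
    return alerts

# ws-042 triggers (antivirus + firewall within a minute); the lone ws-107
# antivirus alert does not.
print(correlate(events))
```

A production rule engine would of course stream events and weight them by severity; the point here is only that the correlated pair carries a signal neither alert carries alone.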

Legacy vs modern SIEMs

Legacy SIEM solutions do not compare to those offered today. Since the amount of data produced and collected by organizations has skyrocketed over the past few years, organizations need big data architectures that are flexible and scalable, so they can adapt and grow as the business changes over time. Able to handle large and complex implementations, today’s modern SIEM solutions can be deployed in physical or virtual environments, on-premises or in the cloud. Some SIEMs offer very short implementation times and low maintenance resource requirements, delivering value within a matter of days.

SIEM tools must be able to ingest data from all sources, including cloud and on-premises log data, in real time to effectively monitor, detect and respond to potential threats. Modern SIEM solutions do not just have the ability to ingest and analyze more data; they thrive on it. The more data an organization can provide its SIEM, the more visibility analysts will have into activity and the more effective they will be in detecting and responding to threats.

The modern SIEM provides a host of benefits, such as better threat detection and response. As cyber threats continue to expand and increase, businesses that can analyze security events more quickly and accurately will have a competitive advantage. A modern SIEM solution provides real-time data analysis, early detection of data breaches, data collection, secure data storage and accurate data reporting to improve threat detection and response times.

Modern SIEM solutions go beyond basic security monitoring and reporting; they provide analysts with the clarity they need to improve decision-making and response times. With new ways of visualizing data to help analysts better interpret and respond to what that data is telling them, incident response and management becomes more sophisticated. Better analytics means teams can more accurately manage incidents and improve their forensic investigations, all within a single interface.

Automation and machine learning

Today’s IT teams are increasingly resource- and time-constrained. Enhanced automation frees security analysts from time-consuming manual tasks and enables them to better orchestrate responses to threats. The best modern SIEM solutions utilize machine learning and user and entity behavior analytics (UEBA) to ease the burden on overworked security analysts by automating threat detection, providing enhanced context and situational awareness, and drawing on user behavior for better insights.

Moreover, UEBA enables better detection and response. Attackers often rely on compromised credentials or on coercing users into performing actions that harm their own organization. To identify these types of attacks more quickly and accurately, UEBA can be used to monitor suspicious user behavior and activity across cloud, mobile and on-premises applications, endpoints and networks, as well as external threats.

With UEBA, organizations will see a dramatic increase in their SIEM’s ability to track and identify threats. In addition, UEBA reduces false positives, so analysts have greater situational awareness before, during and after a threat occurs, meaning they are more effective and can spend their limited time on threats that will actually have an impact on operations.
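At its core, the behavioral analysis described above compares observed activity against an entity's own history. The toy check below flags a value that deviates sharply from a user's baseline; the three-sigma threshold and the daily-download-volume feature are illustrative assumptions, not how any specific UEBA product works.

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Return True if `observed` lies more than `threshold` standard
    deviations from the entity's historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Daily download volumes in MB for one user over two weeks (invented data)
baseline = [120, 95, 130, 110, 105, 125, 98, 115, 122, 101, 118, 108, 111, 127]

print(is_anomalous(baseline, 140))   # within normal variation -> False
print(is_anomalous(baseline, 4200))  # sudden bulk download -> True
```

Real UEBA systems model many features at once (login hours, geography, peer-group behavior) and learn thresholds rather than hard-coding them, but the baseline-deviation principle is the same.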

Cost control

A modern SIEM solution that has a simple and predictable licensing model also enables businesses to spend less to keep their data secure, regardless of the amount of data they have and the number of sources from which data is logged. SIEM pricing models that are based on data usage are outdated. Data volumes are constantly increasing, and organizations should not be punished for that. 

Modern SIEM pricing models should instead be based on the number of devices or entities sending logs, meaning organizations will not have to worry that their data usage is affecting the cost and can instead focus on scaling for future business needs. Make sure you analyze the total cost of ownership, including what happens when the SIEM needs to scale: some vendors add cost when hardware capabilities increase or when more employees need access to the SIEM.
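To make the contrast concrete, this toy calculation compares a usage-based licence with a per-device licence as log volume triples while the device count stays flat. All prices and volumes are invented for illustration.

```python
def volume_cost(gb_per_day, price_per_gb=0.10):
    """Annual cost under a data-usage licence (hypothetical pricing)."""
    return gb_per_day * 365 * price_per_gb

def device_cost(devices, price_per_device=20.0):
    """Annual cost under a per-device licence (hypothetical pricing)."""
    return devices * price_per_device

# Log volume triples over three years while the device count stays at 800:
for year, gb in enumerate([500, 1000, 1500], start=1):
    print(f"year {year}: usage-based {volume_cost(gb):,.0f} "
          f"vs per-device {device_cost(800):,.0f}")
```

Under these made-up numbers the usage-based bill triples while the per-device bill stays flat, which is the predictability argument the article is making.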

Another cost consideration is meeting compliance requirements; failing to do so can be even more costly in fines, legal fees and reputational damage. SIEM solutions can automate data collection, store event logs, improve threat identification and reporting, restrict data access, and flag policy and compliance violations to ensure businesses meet their compliance requirements.

These are undisputed gains, but according to Gartner, there are three main areas where a modern SIEM solution should excel:

1. Advanced threat detection

With a modern SIEM tool, advanced threat detection can be executed in real time, allowing organizations to analyze and report on trends as well as user and entity behavior. With advanced analytics, organizations can monitor data access and application activity, and proactively detect and control advanced persistent threats (APTs).

Threat detection capabilities include enrichment with internal or external contextual information, such as threat intelligence, user names or temporal knowledge. This enables security analysts to operate faster and more efficiently. Organizations should invest in SIEM solutions that provide access to effective ad-hoc queries, machine learning and UEBA capabilities, which will result in more effective and efficient threat hunting.

2. Security Monitoring

SIEM is an effective log management tool that allows for basic security monitoring and is often used for compliance reporting and real-time monitoring of security controls. SIEM solutions should meet basic threat detection, compliance auditing and reporting requirements. With flexible and convenient collection and storage of logs, auditors’ needs can be accommodated, making compliance much easier.

Popular use cases among customers for basic security monitoring cover a broad range of security sources, including perimeter and network devices, endpoint agents, critical applications, and other infrastructure components.

3. Investigation and incident response

Visualization is very important for making sense of your data. A modern SIEM can give you the clarity you need, providing new ways to visualize data that make it easy to interpret and respond to what the data is telling you.

Incident response and management should be easy, fast and actionable, making it convenient to manage incidents within your team and enabling effective forensic investigations. If these capabilities are not within the tool itself, world-class integrations with dedicated tools, both within and outside of SOAR, are important. With business context, security intelligence, user monitoring, data monitoring and application monitoring all within a single interface, analysts will be more effective and informed.

Implementing a modern SIEM solution, or upgrading an existing SIEM to one that offers analytics and machine learning capabilities, will allow organizations to keep up with today’s expanding threat landscape without the growing costs associated with highly skilled security analysts or having to deal with outdated log-volume pricing models. Remember also that replacing a SIEM does not necessarily mean your current investment is lost; some SIEM vendors will help you with a seamless transition to make sure full value is captured and transferred.

]]>
https://tbtech.co/news/critical-capabilities-of-the-modern-siem/feed/ 0
API attacks – why they’re particularly challenging for telecoms https://tbtech.co/news/api-attacks-why-theyre-particularly-challenging-for-telecoms/?utm_source=rss&utm_medium=rss&utm_campaign=api-attacks-why-theyre-particularly-challenging-for-telecoms https://tbtech.co/news/api-attacks-why-theyre-particularly-challenging-for-telecoms/#respond Mon, 22 May 2023 12:05:00 +0000 http://52.56.93.237news/api-attacks-why-theyre-particularly-challenging-for-telecoms/ A barrage of criticism has been levelled at telecom providers recently regarding the alleged simplicity of the attacks carried out against their Application Programming Interfaces (APIs). But telecoms face some unique challenges when it comes to securing their APIs, as they service thousands of applications over an ever-growing base of devices as the IoT expands. In addition, many are saddled with a sprawling IT estate and inherited systems, including older legacy APIs, that are then further disrupted by subscriber growth and M&A activity. Consequently, a disproportionate amount of the telecom ecosystem runs on APIs compared to other businesses.

The recent headline-grabbing API attacks have also been more sophisticated than first meets the eye. Attackers are now combining multiple attack methods from the menu of the OWASP API Top Ten, with API2 (Broken User Authentication) being combined with API3 (Excessive Data Exposure) and API9 (Improper Assets Management). Telcos are not alone in this regard, with the first half of 2022 showing a marked trend in attacks against undocumented or ‘shadow’ APIs and tactical assaults that use multiple OWASP threats or API10+ as covered in the API Protection Report. 

In one of the attacks carried out this year, the abused APIs were meant for a development testing environment but were inadvertently exposed to the public. During reconnaissance of the network, the attackers determined that the APIs required no authentication or authorisation and responded with customer data from a live database. Checks on the key input parameter, which provides a unique identifier for each user, also weren’t in place, making it trivial to launch a bot attack that iterated through different user IDs and returned customer data.
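The missing control here was an object-level authorisation check. The sketch below, with invented data and helper names rather than anything from the actual incident, shows the single comparison whose absence makes this kind of ID enumeration trivial.

```python
# Hypothetical user store: each record remembers which caller token owns it.
USERS = {
    "u-1001": {"owner_token": "tok-alice", "email": "alice@example.com"},
    "u-1002": {"owner_token": "tok-bob",   "email": "bob@example.com"},
}

def get_user(user_id, auth_token):
    """Return a user record only to the caller that owns it."""
    record = USERS.get(user_id)
    if record is None:
        return {"status": 404}
    # Without this comparison, any caller can iterate user IDs and harvest
    # live customer data -- exactly the enumeration described above.
    if record["owner_token"] != auth_token:
        return {"status": 403}
    return {"status": 200, "email": record["email"]}

print(get_user("u-1002", "tok-alice"))  # another user's record -> {'status': 403}
print(get_user("u-1001", "tok-alice"))  # caller's own record -> includes email
```

A real API would verify a signed token rather than compare strings, but the principle is the same: authenticate the caller, then check that the caller is authorised for the specific object requested.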

A wake-up call

It’s important to note that this type of attack can be lightning quick, taking just minutes to execute if insufficient security is in place. As a result, operators worldwide are now realising that they need to discover the APIs on their network, put detection in place to identify when and how an attack is manifesting, and defend their APIs.

In another example, a large mobile operator with over 100 million subscribers recently carried out a discovery exercise to verify the APIs across its infrastructure. It found that over a thousand unprotected API servers, equivalent to 18 percent of its server base, were publicly exposed. Over 30 percent of its API servers had SSL issues, such as invalid or expired certificates, which would have made it possible for an attacker to launch a man-in-the-middle attack.
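A certificate-expiry check of the kind such a discovery exercise might run can be sketched in a few lines. The date format below matches the `notAfter` text Python's `ssl` module returns; the hostname is a placeholder and the helper names are invented.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_remaining(not_after, now=None):
    """Parse a certificate's notAfter field (e.g. 'Jun  1 12:00:00 2030 GMT')
    and return days until expiry (negative means already expired)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    now = now or datetime.now(timezone.utc).replace(tzinfo=None)
    return (expires - now).days

def check_server(host, port=443):
    """Connect over TLS and report how many days the peer cert has left."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_remaining(tls.getpeercert()["notAfter"])

print(days_remaining("Jun  1 12:00:00 2030 GMT", datetime(2029, 6, 1, 12, 0)))  # 365
```

Run against an inventory of hosts, a script like this surfaces the expired and soon-to-expire certificates the operator found, without touching the application layer at all.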

The telco also discovered over 107 files that could expose private keys which, in the wrong hands, could have been used to access mission critical business information. Plus, despite an earlier rigorous patching campaign to address the threat posed by the Log4Shell vulnerability, five APIs were discovered that remained vulnerable to this issue.

Yet another attack against one of the largest telecom providers in the world saw the number one OWASP exploit – Broken Object Level Authorisation – used to enumerate access to the API to carry out an account takeover (ATO) attack with the end goal of SIM swapping.

The criminals sought to obtain information about a customer account by enumerating whether cell phone numbers could be transferred to the provider’s network. Once the attacker discovered transferable phone numbers, they attempted to impersonate the account owner to manipulate an employee into transferring the cell number onto a SIM card in the attacker’s possession. From this point, the attacker could then take control of the victim’s sensitive accounts by completing SMS-based 2FA.

Detecting such attacks is difficult because the traffic is rotated across multiple IP ranges from known malicious bulletproof proxies. However, there are tell-tale giveaways. The timing of the requests was too perfect, with attackers initiating a single API endpoint request per IP at two-second intervals, resulting in nine million malicious requests, and there was a high IP-to-request ratio, as each phone number had been attempted from over 200 IP addresses. Because the telco was able to spot this suspicious activity, the attack was successfully blocked during the ten hours it persisted.
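Those two giveaways, metronomic request timing and many source IPs per target identifier, can be approximated with a simple heuristic. The thresholds and request shape below are arbitrary illustrations, not values from the incident.

```python
from collections import defaultdict
from statistics import pstdev

def suspicious(requests, max_jitter=0.1, max_ips_per_target=50):
    """Flag traffic whose inter-request gaps are near-constant, or where one
    target identifier is hit from an unusually large number of source IPs."""
    gaps = [b["t"] - a["t"] for a, b in zip(requests, requests[1:])]
    metronomic = len(gaps) > 1 and pstdev(gaps) < max_jitter
    ips = defaultdict(set)
    for r in requests:
        ips[r["target"]].add(r["ip"])
    rotated = any(len(v) > max_ips_per_target for v in ips.values())
    return metronomic or rotated

# Requests arriving exactly two seconds apart, each from a fresh IP,
# all probing the same phone number:
bot = [{"t": 2.0 * i, "ip": f"10.0.{i // 256}.{i % 256}", "target": "num-1"}
       for i in range(300)]
print(suspicious(bot))  # True: zero timing jitter and 300 IPs for one target
```

Human traffic has jitter and a handful of IPs per user, so it falls below both thresholds; behavior-based detection of this sort catches rotation that per-IP rate limits miss.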

Improving API security

With attackers using a wider variety of tactics, techniques, and procedures (TTPs), the onus is now on telecom providers to respond. But if, contrary to popular opinion, these attacks are sophisticated and the telecom estate is difficult to secure, what can be done to secure these APIs?

Firstly, discover what you’re dealing with by documenting your APIs and maintaining a runtime inventory so that all APIs are known. If APIs are spun up and forgotten, an attacker can analyse them at leisure, undetected.

Secondly, it’s important to note that even perfectly coded APIs can still be vulnerable to abuse, so while incorporating security testing into the development process (aka ‘shift left’) is key, all APIs should be regarded as vulnerable. This is because business logic abuse, whereby the attacker probes the API and uses legitimate requests to gain access, can succeed even against flawlessly written APIs.

For this reason, it’s vital to know what ‘normal’ behaviour looks like for that API and to monitor for any deviation from this through traffic analysis. Many of the existing API security mechanisms in place, such as WAFs or API Gateways, rely upon signature-based detection when what is needed is behaviour-based analysis that can spot suspicious requests.

The final piece of the puzzle is defence. Looking at your API infrastructure through the eyes of an attacker enables the business to identify its publicly exposed APIs and where they are hosted, carry out critical patching, and monitor them continuously. Should an attack occur, the incident can be dealt with in numerous ways, from real-time blocking to deception and deflection techniques that frustrate the attacker into aborting the attack.

For instance, in the case of the multi-faceted API10+ attack mentioned above, the discovery phase would reveal any shadow or publicly exposed APIs, while detection would determine if authentication was in place and whether the APIs were transacting any sensitive data in their response. Finally, the defence element would act upon the knowledge that there was an attacker perpetrating a bot attack and would natively block the traffic and deflect the attacker.

Having in place a security strategy that covers the entire API lifecycle and seeks to discover, detect, and defend against these attacks is the only way that telecom providers can hope to counter these attacks, particularly given the complexity of their API estate and service offerings.

]]>
https://tbtech.co/news/api-attacks-why-theyre-particularly-challenging-for-telecoms/feed/ 0
Shadow API usage surges 900% https://tbtech.co/news/shadow-api-usage-surges-900/?utm_source=rss&utm_medium=rss&utm_campaign=shadow-api-usage-surges-900 https://tbtech.co/news/shadow-api-usage-surges-900/#respond Mon, 22 May 2023 12:05:00 +0000 http://52.56.93.237news/shadow-api-usage-surges-900/ Cequence Security, the leading provider of Unified API Protection (UAP), today released its second half 2022 report titled, “API Protection Report: Holiday Build-up Shows 550% Jump in Unique Threats.” Developed by the CQ Prime Threat Research Team, the report is based on the analysis of approximately one trillion API transactions spanning various industries over the second half of 2022 and seeks to highlight the latest API threat trends plaguing organisations today.

In contrast to reports based on survey and qualitative data, this threat report covers actual tactics, techniques, and procedures (TTPs) employed by threat actors targeting consumer-facing, business-to-business (B2B), and machine-to-machine APIs. It serves as a critical resource for decision-makers, security professionals, and other stakeholders tasked with safeguarding their organisations.

APIs have exploded in use and can be found in nearly everything we do online: logins, payments, transfers, online banking, autonomous driving, and more. Driven by modular, cloud-native applications, mobile device ubiquity, digital assistants and smart-home appliances, APIs are the connective tissue for all things digital. The explosive growth in API use is understandable, given the business benefits of an API-first application development methodology.

“API breaches have plagued numerous high-profile organisations in recent months, elevating the need for CISOs to prioritise API protection. Attackers are getting more creative and specific in their tactics, and traditional protection techniques are no longer enough,” said Ameya Talwalkar, CEO and founder of Cequence Security. “As attack automation becomes an increasingly prevalent threat against APIs, it’s critical that organisations have the tools, knowledge and expertise to defend against them in real-time.”

The second half of 2022 (June 1 to December 31) marked a significant turning point in the security landscape. The Cequence CQ Prime Threat Research Team observed a noteworthy shift in cybercriminals’ tactics, techniques and procedures (TTPs). In several high-profile incidents, application programming interfaces (APIs) emerged as a primary attack vector, posing a new and significant threat to organizations’ security posture.

Key findings include:

● Shadow APIs Spike 900%, Highlighting a Lack of API Visibility: In the second half of 2022 alone, approximately 45 billion search attempts were made for shadow APIs, marking a 900% increase from the 5 billion attempts made in the first half of 2022.

● Holiday Season Sees 550% Increase in Unique Threats: There was a 550% increase in the number of unique TTPs employed by attackers, rising from approximately 2,000 in June to a staggering 11,000 towards the end of 2022.

● Attackers Increasingly Combine API and Web Application Security Tactics: From June 2022 to October 2022, attackers favoured traditional application security tactics; however, as the holidays approached, there was a 220% surge in API security tactics.

● Attack Surface Sprawl Highlights the Telecom API Protection Challenge: Most re-tool attempts in the telecom industry were entirely new TTPs, which shows that the threat tactics utilised are diverse, sophisticated, and persistent.

● New OWASP API Threat Category API8 – Lack of Protection from Automated Threats, Validated: The CQ Prime Threat Research Team previously identified the need for API10+ to go beyond the OWASP API Top 10 to include protection against automated attacks. The threat report findings and the addition of API8: Lack of Protection from Automated Threats to the OWASP API Security Top 10 2023 release candidate confirm Cequence’s past observations and endorse the inclusion of native bot mitigation capabilities in a robust API security program.

The report clearly demonstrates that the API threat landscape is constantly evolving, and organisations need to be vigilant in protecting their APIs and web applications from automated threats (bots) and vulnerability exploits. Attackers are becoming more sophisticated and API-specific in their tactics, and traditional protection techniques are proving ineffective.

“Our research is vital in providing organisations with the necessary tools and knowledge to mitigate attacks in real-time,” Talwalkar continued. “By staying ahead of the curve and understanding the latest attack methods and tools, organisations can achieve Unified API Protection and build the awareness and confidence needed to protect their APIs from even the most sophisticated attacks.”

The rise in the utilization of API security TTPs underscores the importance for organizations to adopt a comprehensive and proactive approach to their API security posture. By conducting regular API threat surface assessments, API specification anomaly detection, and implementing real-time automated threat (bot) detection and mitigation measures, businesses can prevent attacks from progressing beyond the reconnaissance stages, limiting the impact of any potential business disruption and security events.

To protect against these threats, it’s important to adopt a comprehensive approach to API security that considers the perspectives of attackers, defenders, and developers, along with governance, risk, and compliance (GRC) officers. Each viewpoint has specific qualities that need to be addressed to ensure a comprehensive security posture. Defenders should focus on key metrics, detection tools, and mechanisms to mitigate potential threats. Developers and defenders need to satisfy GRC officers by being able to audit the API inventory and confirm that APIs conform to specifications without exposing sensitive data. Developers should focus on integrating security measures into the CI/CD pipeline and improving the resilience of APIs to automated attacks. By taking this comprehensive approach and considering these various viewpoints, organizations can better protect their systems and data from emerging threats.

● To find out more, register for the webinar on Thursday, June 22, 2023 “API Protection Report: Second Half Findings” at 11 am PST, 11 am BST and 11 am AEST via www.cequence.ai.

About Cequence Security

Cequence Security, the pioneer of Unified API Protection, is the only solution that unifies API discovery, inventory, compliance, dynamic testing with real-time detection and native mitigation to defend against fraud, business logic attacks, exploits and unintended data leakage. Cequence Security secures more than six billion API transactions a day and protects more than two billion user accounts across our Fortune 500 customers. Learn more at www.cequence.ai.

]]>
https://tbtech.co/news/shadow-api-usage-surges-900/feed/ 0
The Future of Cloud: A Realistic Look at What’s Ahead https://tbtech.co/news/the-future-of-cloud-a-realistic-look-at-whats-ahead/?utm_source=rss&utm_medium=rss&utm_campaign=the-future-of-cloud-a-realistic-look-at-whats-ahead https://tbtech.co/news/the-future-of-cloud-a-realistic-look-at-whats-ahead/#respond Wed, 22 Mar 2023 12:17:05 +0000 http://52.56.93.237?p=254355 Cloud computing has transformed the way we work, communicate, and consume technology. From storing data to running applications, the cloud has become an essential part of our lives. But what does the future hold for this technology? In this article, we’ll take a realistic look at the future of the cloud and what we can expect to see in the coming years.

Continued Growth

The cloud computing market is expected to continue its growth trajectory in the coming years. According to a report by Gartner, the worldwide public cloud services market is forecast to grow 23.1% in 2021 to reach $332.3 billion, up from $270 billion in 2020. The adoption of cloud technology has been accelerated by the COVID-19 pandemic, as businesses have had to rapidly adapt to remote work and digital transformation.

Hybrid Clouds

Hybrid cloud architectures that combine public and private clouds are becoming more popular. This approach enables organizations to take advantage of the scalability and cost-effectiveness of public clouds while retaining control over sensitive data and applications. According to a report by Flexera, 93% of enterprises have a multi-cloud strategy, and 87% have a hybrid cloud strategy.

Edge Computing

Edge computing is an emerging technology that brings computation and data storage closer to the devices and sensors that generate the data. This approach reduces latency and enables real-time data processing and analysis, which is critical for applications such as autonomous vehicles, smart cities, and industrial automation. According to a report by MarketsandMarkets, the edge computing market is expected to grow from $3.6 billion in 2020 to $15.7 billion by 2025, at a compound annual growth rate of 34.1%.

Security and Privacy

As more data and applications move to the cloud, security and privacy become even more critical concerns. Cybersecurity threats are evolving and becoming more sophisticated, and data breaches can have severe consequences. In response, cloud providers are investing heavily in security and privacy measures, such as encryption, identity and access management, and threat detection and response. Additionally, regulations such as GDPR and CCPA are increasing the legal and financial risks of data breaches, making security and privacy a top priority for businesses.

Artificial Intelligence

Artificial intelligence (AI) is increasingly being integrated into cloud services, enabling organizations to build and deploy intelligent applications and services quickly and efficiently. AI-powered cloud services can perform complex tasks, such as natural language processing, image recognition, and predictive analytics, at scale. This technology is expected to have a significant impact on various industries, including healthcare, finance, and retail. According to a report by Allied Market Research, the global AI in the cloud market is projected to reach $97.9 billion by 2027, growing at a CAGR of 41.1% from 2020 to 2027.

Serverless Computing

Serverless computing is a new paradigm in cloud computing that allows developers to run code without having to manage servers. This approach reduces operational overheads and enables organizations to focus on developing and deploying applications quickly. According to a report by MarketsandMarkets, the serverless architecture market is expected to grow from $4.25 billion in 2020 to $14.93 billion by 2025, at a CAGR of 28.6%.
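The "no servers to manage" idea can be seen in the shape of a typical serverless function: the developer writes a single handler that the platform invokes per request, and scaling, provisioning, and process lifecycle are the provider's problem. The event shape below is a hypothetical illustration of common provider conventions, not any one platform's exact contract.

```python
# Sketch of a serverless-style function: the platform calls handler(event)
# for each request; no server, port, or scaling code appears anywhere.

import json

def handler(event, context=None):
    """Return an HTTP-style response greeting the 'name' field in the request body."""
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, the handler can be exercised like any plain function:
resp = handler({"body": json.dumps({"name": "cloud"})})
```

Because the unit of deployment is just this function, teams pay per invocation and iterate on business logic without touching infrastructure, which is the operational saving the paragraph above describes.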

Conclusion

The cloud computing industry is evolving rapidly as new technologies and trends emerge. The market's growth is expected to continue, with hybrid clouds, edge computing, security and privacy, artificial intelligence, and serverless computing as the major drivers. To stay ahead of the curve, businesses must keep pace with these developments and invest in cloud technologies that align with their goals and objectives. The future of the cloud is bright, and it will continue to transform the way we work, communicate, and consume technology.

]]>
https://tbtech.co/news/the-future-of-cloud-a-realistic-look-at-whats-ahead/feed/ 0