Fixing the Public Sector IT Debacle

Public sector IT services are no longer fit for purpose. Constant security breaches. Unacceptable downtime. Endemic over-spending. Delays in vital service innovation that would reduce costs and improve citizen experience.

While the UK’s public sector is on the front line of a global escalation in cyberattacks, the number of breaches leading to service disruption, data loss and additional costs to rebuild and restore systems is unacceptable and unnecessary. A lack of expertise, insufficient procurement rigour and a herd mentality have led to over-reliance on a handful of vendors, ubiquitous infrastructure models and identical security vulnerabilities that are quickly and easily exploited.

Budgets are adequate. Better, more affordable and secure technologies are mature and proven. As Mark Grindey, CEO of Zeus Cloud, argues, it is the broken tender process that is fundamentally undermining innovation and exposing the public sector to devastating security risk.

Broken Systems

There is no doubt that the UK’s public sector organisations are facing an ever-growing security threat. Like public bodies in every developed country, they are subject to state-sponsored attacks designed to undermine the delivery of essential services. And the cost to recover from these cyberattacks is devastating, with councils spending millions in recent years to recover from ransomware attacks.

The ever-rising threat level is, however, just one part of the story. While public sector bodies are prime targets due to the level of sensitive data held, the impact of attacking critical infrastructure and the appeal of targeting a high-profile organisation, not every public body is enduring repeated downtime as a result of breaches.

Nor does a single hack automatically affect every part of the organisation, leading to a disruption of vital services for days, even weeks. So what differentiates the organisations with a good cyber security track record, such as Bexley Council and Bedford Council, from the rest? And, critically, what is the best way to propagate best practice throughout the public sector to mitigate risk?

Broken Tender Process

The issue is not budget. The public sector may constantly claim a lack of funding, but money is not the root cause of inadequate security or inconsistent service delivery. The problem is how that money is spent. Despite attempts to improve the rigour of public sector IT investment, the current tendering process is fuelling misdirected and excessive spend.

In theory, an open tender model should ensure that money is well spent. It should guarantee the service is delivered by the best provider. In reality, the vast majority of contracts are allocated to the same handful of large organisations. That would be fine if the services delivered were top quality, highly secure and fairly priced. They are not. The public sector is routinely charged three times as much as the private sector for equivalent IT deployments. Three times as much.

In addition to this endemic overspending, the reliance on a small number of vendors radically increases the security threat due to the ubiquity of infrastructure models. When the majority of public sector organisations have relocated to the same public cloud hyperscaler and adopted identical security postures, it is inevitable that a breach at one organisation will be rapidly exploited and repeated in others. 

Inadequate Rigour

The current tender process completely lacks rigour. Given the continued security breaches, why are these vendors not being held to account? Why are they still being awarded new contracts? Indeed, why are they winning the business to rebuild and recover systems damaged by a security breach that occurred on their watch, when other Managed Service Providers and cloud platforms can offer not only better pricing but a far better security track record? Something is clearly going very wrong in public sector procurement.

The public sector is complicit in this overspending: any vendor attempting to come in and charge a lower (fair) amount is automatically discounted from the tender process. Why? There are multiple reasons, not least that the public sector has been ‘trained’ by the IT industry to expect these inflated costs, but there is also a reliance on dedicated Procurement Officers who lack essential sector expertise. Why, for example, is every single system used by Leicester City Council located on the same public cloud platform? It should be impossible for a system breach to extend across every part of the organisation, yet by failing to understand basic security principles the council set itself up for expensive failure.

The lack of expertise is a serious concern. Continued reliance on large IT vendors has resulted in many public sector organisations becoming dangerously under-skilled. Given the lack of internal knowledge, organisations often turn to incumbent vendors for information to support the tender process, leading inevitably to further price inflation. Furthermore, when a crisis occurs, reliance on a third party rather than in-house expertise leads to inevitable delays that exacerbate problems and result in additional cost to repair and restore systems.

Overdue Oversight

The situation is enormously frustrating for IT vendors with the expertise to deliver lower-cost, secure systems. The misdirected spend has left public sector bodies woefully out of date. Not only are security postures frighteningly old-fashioned, but there are also unacceptable delays in vital service delivery innovations that would transform the citizen experience and provide operational cost savings.

Given the escalating pressures facing all public sector organisations, change is essential. In-house expertise must be rebuilt to ensure sector experts are involved in the procurement process, and pricing expectations must be immediately overhauled: avaricious IT vendors will continue to overcharge unless challenged. One option is to appoint an outsourced CTO with broad public and private sector expertise, an individual with the knowledge and experience to call out the endemic overcharging and sanity-check the procurement process.

It is also important to move away from the herd mentality. Would, for example, an on-premises private cloud solution be a better option than a public cloud hyperscaler? What is the cost comparison of adding in-house security expertise rather than relying on a third party – factoring in, of course, the value of a fast response if a problem occurs? It is telling that the handful of local authorities with a good security track record have not adopted the same big-vendor, public cloud approach but applied rigour to the procurement process to achieve a more secure and cost-effective approach. Others could and should learn from these organisations.
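
That cost comparison is straightforward to run once genuine quotes and salary data are on the table. As a purely illustrative sketch, the Python below compares two options over three years; every figure is a hypothetical placeholder, not real pricing.

```python
# Back-of-the-envelope comparison of outsourced vs in-house security.
# All figures are hypothetical placeholders - substitute real quotes,
# salaries and hosting costs obtained during procurement.
YEARS = 3

# Option A: outsourced managed service (hypothetical annual contract),
# plus an estimate for slower incident response while waiting on the vendor.
third_party_contract = 450_000
incident_delay_cost = 120_000
option_a = YEARS * (third_party_contract + incident_delay_cost)

# Option B: in-house expertise on an on-premises private cloud (hypothetical).
salaries = 2 * 65_000            # two in-house security engineers
private_cloud_hosting = 150_000  # hardware, power, support contracts
option_b = YEARS * (salaries + private_cloud_hosting)

print(f"Three-year cost, outsourced: £{option_a:,}")
print(f"Three-year cost, in-house:   £{option_b:,}")
```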

Conclusion

Good, effective IT systems underpin every aspect of public sector service delivery and, right now, the vast majority are not fit for purpose. It is, therefore, vital to highlight and celebrate the good performers – and challenge those vendors that continue to overcharge and underperform.

Sharing information between organisations, both to support strategic direction and day-to-day risk mitigation, is vital to propagate best practice. Critically, by pooling knowledge and expertise, the public sector can begin to regain control over what is, today, a broken model. While the public sector continues to flounder with inadequate security and a lack of knowledge, the IT vendors will continue to win. They need to be held to account, and that can only happen if public sector organisations come together to demand more of the industry.

23andMe sparks rethink about safeguarding data

Recently, 23andMe, the popular DNA testing service, made a startling admission: hackers had gained unauthorised access to the personal data of 6.9 million users, specifically their ‘DNA Relatives’ data.

This kind of high-profile breach made headlines globally and naturally highlights the need for stringent security measures when handling organisational data – especially the type of sensitive genetic information that 23andMe is responsible for. Although the hacker appears to have used a tactic known as credential stuffing – replaying username and password pairs stolen from other services – to access 23andMe’s customer accounts, the incident poses wider questions for organisations, IT managers and security experts about the measures used more generally to keep organisational and consumer data safe from threat actors. A key question for many organisations today is where and how they host their data – especially when you consider that 23andMe’s data has reportedly been stored solely on cloud servers.
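
Because credential stuffing succeeds wherever passwords are reused, one common mitigation – a general technique, to be clear, not a claim about what 23andMe itself deploys – is to screen passwords against known breach corpora at registration or login. A minimal sketch in Python, using the third-party requests library and the Have I Been Pwned range API, whose k-anonymity design means only the first five characters of the password’s SHA-1 hash ever leave the machine:

```python
import hashlib

import requests  # third-party: pip install requests


def password_appears_breached(password: str) -> bool:
    """Return True if the password appears in the Have I Been Pwned corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent; matching suffixes come back.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line has the form "<HASH-SUFFIX>:<COUNT>".
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())


if __name__ == "__main__":
    print(password_appears_breached("password123"))  # expect True
```

Rejecting or flagging such passwords blunts stuffing attacks at the point where reused credentials would otherwise sail through.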

Mark Grindey, CEO of Zeus Cloud, explains that one way organisations can mitigate similar risks is by implementing on-premises and hybrid cloud solutions. He covers how these technologies can play a vital role in safeguarding organisational data – such as 23andMe’s genetic data – and shares insights into the key steps organisations can take to be more secure.

Achieving direct control of data

In 23andMe’s case, the compromised ‘DNA Relatives’ data holds immense value and is extremely sensitive, because it enables individuals to connect with potential relatives based on shared genetic information. This kind of valuable data often becomes a target for cybercriminals seeking to exploit it for various purposes, including identity theft, fraud and other nefarious activities. To protect this type of information, organisations need to implement robust security measures that ensure the confidentiality, integrity and availability of the data.

On-premises solutions enable part of this protection to take place effectively: hosting data and applications within an organisation’s own physical infrastructure gives organisations direct control over their data and allows them to implement rigorous security protocols. For instance, by keeping genetic data on-site, an organisation like 23andMe is able to secure it behind multiple layers of firewalls and intrusion detection systems, reducing the risk of external breaches. Additionally, access to this data can be restricted to authorised personnel only, minimising the potential for internal data leaks.

Another approach worth considering, for many organisations, is the hybrid cloud, which combines the advantages of on-premises and cloud-based services. Organisations can use public or private clouds to store non-sensitive data while keeping sensitive information – like the genetic data in 23andMe’s case – on-premises. This method gives organisations the flexibility to scale resources and accommodate fluctuating user demand while still maintaining strict control of their data. When set up and configured correctly – using encrypted connections and robust authentication mechanisms – hybrid cloud solutions ensure secure data transmission between the on-premises and cloud environments.
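
As a purely illustrative sketch of that split, the toy Python below routes records tagged as sensitive to an on-premises store and everything else to a cloud store. The class names and the boolean sensitivity flag are invented for the example; a real deployment would attach actual storage backends and a formal data classification scheme to the same decision point.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Record:
    record_id: str
    payload: str
    sensitive: bool  # e.g. genetic or other special-category data


@dataclass
class DataStore:
    name: str
    records: Dict[str, str] = field(default_factory=dict)

    def put(self, record: Record) -> None:
        self.records[record.record_id] = record.payload


on_prem = DataStore("on-premises private cloud")
public_cloud = DataStore("public cloud")


def place(record: Record) -> DataStore:
    """Route sensitive records on-premises; everything else to the cloud."""
    store = on_prem if record.sensitive else public_cloud
    store.put(record)
    return store


print(place(Record("r1", "kinship match data", sensitive=True)).name)
print(place(Record("r2", "public help-centre article", sensitive=False)).name)
```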

Steps Towards Preventing Data Breaches

While implementing on-premises and hybrid cloud solutions can significantly reduce the risk of data breaches and unauthorised access to data, there are several other crucial steps organisations can take to secure and protect data from breaches.

Obvious as it may seem to many in the industry, today it is vital to encrypt data both at rest and in transit. This renders compromised data meaningless to unauthorised users, even if threat actors manage to gain access to it. Implementing multi-factor authentication is vital too: it strengthens access controls and adds an extra layer of security. Users should be required to provide multiple forms of verification, such as passwords, biometrics or smart cards, before they can access their genetic data. In 23andMe’s case, while it does offer this approach to its users, perhaps, given the recent breach, its use should be made mandatory.
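
A minimal sketch of both measures, using two well-known third-party Python libraries – cryptography for symmetric encryption at rest and pyotp for time-based one-time passwords. The sample plaintext and in-process key handling are illustrative only; production keys belong in a KMS or HSM, not in application memory.

```python
from cryptography.fernet import Fernet  # pip install cryptography
import pyotp                            # pip install pyotp

# --- Encryption at rest: only ciphertext is ever written to storage. ---
key = Fernet.generate_key()  # illustrative; store real keys in a KMS/HSM
fernet = Fernet(key)
ciphertext = fernet.encrypt(b"shared DNA segment: chr7, user 1042")
assert fernet.decrypt(ciphertext) == b"shared DNA segment: chr7, user 1042"

# --- A second factor: time-based one-time passwords (TOTP). ---
secret = pyotp.random_base32()  # provisioned once per user, e.g. via QR code
totp = pyotp.TOTP(secret)
code = totp.now()               # what the user's authenticator app displays
assert totp.verify(code)        # server-side check alongside the password
```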

Aside from this, it is recommended that organisations conduct frequent security audits to identify vulnerabilities and ensure compliance with industry standards and best practices. This involves testing the effectiveness of security protocols and promptly addressing any discrepancies.

Finally, no robust security framework is complete without equipping employees with proper training and awareness of their responsibilities for securing and protecting data. Regular security awareness programmes help staff understand their roles in protecting data.

23andMe claims that it exceeds industry data protection standards, has achieved three different ISO certifications to demonstrate the strength of its security programme, and routinely monitors and audits its systems. Even so, an incident like this – along with the PR and media attention it has attracted – will undoubtedly have caused its team to re-evaluate all of its security parameters, including further staff training, to ensure it doesn’t happen again.

Conclusion

23andMe’s recent data breach serves as a wake-up call for organisations handling data, especially sensitive genetic information provided by consumers. An incident of this kind will naturally have caused the company to reconsider its security policies and its approach to securing organisational and customer data. Today, as other organisations consider their own approach to security, many will review where and how their data is stored, managed and accessed.

This is especially true of banks, telcos, insurance companies and many other kinds of firms. On-premises and hybrid cloud solutions provide powerful and effective options here too. They enable organisations to fortify their security measures and protect against potential data breaches. 

Direct control over data, combined with tools and tactics like encryption, multi-factor authentication, security audits and employee training, creates a comprehensive defence against unauthorised access to organisational data. These are the measures that the likes of 23andMe, along with many other organisations, will be considering and prioritising as they strive to adopt more robust security measures that ensure the privacy and integrity of organisational, and consumer, data.

The Blame Game: The problem of post-incident review

You’ve been breached and worked through the Incident Response (IR) plan: identified, mitigated, informed the necessary authorities and communicated with affected parties. But the next stage is perhaps the most crucial part of the process, and the one that also tends to be mismanaged. Post-incident review tries to learn from the process: what just happened, how it was dealt with, and where there’s room for improvement.

Much like the post-match analysis that follows every football game, post-incident review assesses the highs and lows in order to determine how effective IR has been and how defences can be bolstered to strengthen the organisation’s ability to withstand future attacks.

The review seeks to capture the entire span of the incident and typically comprises a three-step process, according to industry body CREST. First, the review details all the steps taken during IR; this is followed by formal documentation of all the lessons learned, which is supplied to all stakeholders; the final stage sees the IR plan itself revised and updated. In theory, this should lead to improvements that help mitigate the risk of a recurrence, shorten detection time, and improve diagnosis, prioritisation and the allocation of resources.
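
That three-step structure is simple enough to capture in a record and track over time. The Python sketch below is one hedged illustration; the class and field names are invented for the example rather than drawn from CREST’s own templates.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class TimelineEntry:
    timestamp: datetime
    action: str  # what responders did at this point in the incident


@dataclass
class PostIncidentReview:
    incident_id: str
    timeline: List[TimelineEntry] = field(default_factory=list)  # step 1: steps taken
    lessons_learned: List[str] = field(default_factory=list)     # step 2: for stakeholders
    ir_plan_revisions: List[str] = field(default_factory=list)   # step 3: plan updates

    def summary(self) -> str:
        return (f"{self.incident_id}: {len(self.timeline)} actions reviewed, "
                f"{len(self.lessons_learned)} lessons, "
                f"{len(self.ir_plan_revisions)} plan revisions")


review = PostIncidentReview("INC-2022-0042")
review.timeline.append(TimelineEntry(datetime(2022, 11, 3, 9, 15), "Isolated affected host"))
review.lessons_learned.append("Detection relied on a user report, not monitoring")
review.ir_plan_revisions.append("Add alerting for anomalous outbound traffic")
print(review.summary())
```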

Long-term repercussions

This kind of wash-up is vitally important because breaches can cost big time. Research into how a data breach affects stock price found the effect can be cumulative, shaving significant value off the business: after a year the share price drops 8.6 percent on average, falling further to 11.3 percent after two years and 15.6 percent after three years, even though the impact of the data breach itself will have lessened. The average cost of a data breach in 2022 is said to be $4.35 million, but businesses with an IR team which regularly test the IR plan are estimated to save $2.66 million, according to IBM’s Cost of a Data Breach Report 2022.

Reducing the prospect of further breaches is therefore very much in the interests of senior management. But, according to the ISC(2) Cybersecurity Workforce Study 2022, the corporate focus tends to fall predominantly on the performance of the security team itself, with 40 percent saying they felt under increased scrutiny and 41 percent reporting an increase in workloads post-breach. Interestingly, very little investment tended to result: only 20 percent said a high-profile breach would lead to further spend, and only 16 percent to the hiring of more staff. And, somewhat worryingly, 8 percent said no changes were made at all.

Consequently, this type of post-breach mismanagement tends to lead to another, less well-charted impact: workforce attrition. Feeling under-supported and overwhelmed, the security team is placed at higher risk of burnout. The same report found that a negative culture and burnout and stress came in third and fourth place respectively, after salary and career progression, among the top reasons cybersecurity staff quit. This is cause for concern because, at a time when skills shortages are growing, you really don’t want to lose valuable cybersecurity resource. (The survey found that the cybersecurity skills gap increased 73 percent over the course of the year, equivalent to 56,811 unfilled vacancies in the UK, while the Department for Culture, Media and Sport predicts an annual shortfall of 14,000 entrants into the profession.)

Of course, reviewing data breaches is also a regulatory obligation. The Information Commissioner’s Office (ICO) states that breaches should be analysed to prevent a recurrence, that the type, volume and cost of the breach should be monitored, and that trend analysis should be conducted over time to facilitate understanding. It will also want to see awareness of the lessons learned and evidence that the steps taken were effective. 
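
The monitoring and trend analysis the ICO describes need not be elaborate. As a sketch, assuming a simple in-house breach log – the entries below are entirely hypothetical – a few lines of Python aggregate incident volume and cost by quarter and breach type:

```python
from collections import defaultdict

# Hypothetical breach log: (quarter, breach type, cost in GBP).
breach_log = [
    ("2022-Q1", "phishing", 12_000),
    ("2022-Q2", "phishing", 18_000),
    ("2022-Q2", "misdirected email", 2_500),
    ("2022-Q3", "phishing", 25_000),
]

volume = defaultdict(int)
cost = defaultdict(int)
for quarter, breach_type, breach_cost in breach_log:
    volume[(quarter, breach_type)] += 1
    cost[(quarter, breach_type)] += breach_cost

# Rising counts or costs for a given type signal where controls are failing.
for (quarter, breach_type), count in sorted(volume.items()):
    print(f"{quarter} {breach_type}: {count} incident(s), £{cost[(quarter, breach_type)]:,}")
```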

With the ISC(2) report revealing that little investment is being made in measures that would prevent a recurrence, it’s clear that some companies would be viewed as non-compliant by the ICO – and they’re not in the minority. The OWASP Top 10 Privacy Risks places insufficient data breach response third on the list and released its countermeasures this year. Actions classed as ‘insufficient’ include not informing affected parties about the breach, failing to remedy the situation by fixing the cause, and/or not attempting to limit the data leak.

Cause and effect

It’s important to realise here that many of these failings are due not to technology but to a poor security culture; in fact, the breach itself can often be indicative of this, of systemic issues or of operational failure. If security is not embedded throughout the organisation and its business processes, the security team becomes solely responsible and is doomed to fail.

So what can organisations do to improve their post-breach response, boost morale and retain staff? In reality, any serious data breach should result in changes not just to the IR plan but to policies and procedures, and potentially in further investment in resource, whether that be people or technology.

The cybersecurity team needs to be equipped with the necessary resource to prevent recurrence, but it also needs to be supported, and for that to happen security should be regarded as a shared responsibility throughout the business. Regular auditing, both internal and external – such as through a penetration test – can provide ongoing assessment of the effectiveness of the IR plan and a degree of objectivity. And the IR plan itself should be regarded as a ‘living document’, regularly updated in line with any change to the business, such as new people, acquisitions or service offerings.

That said, we also need to eradicate the culture of blame. Senior management needs to listen to and value the analysis from the cybersecurity team and look at where investment can be made to effectively and efficiently reduce risk. Deprived of grassroots support, the danger is that the team will become disillusioned and disaffected, quietly quitting or leaving within the next few years. Any investment post-breach therefore isn’t just about reducing the likelihood of a recurrence: it is an investment in the team itself, recognition and validation of their efforts, and could well make the difference between whether they stay or go.
