Fixing the Public Sector IT Debacle
https://tbtech.co/news/fixing-the-public-sector-it-debacle/ – Thu, 11 Apr 2024

Public sector IT services are no longer fit for purpose. Constant security breaches. Unacceptable downtime. Endemic over-spending. Delays in vital service innovation that would reduce costs and improve citizen experience.

While the UK’s public sector is on the front line of a global escalation in cyberattacks, the number of breaches leading to service disruption, data loss and additional costs to rebuild and restore systems is unacceptable and unnecessary. A lack of expertise, insufficient procurement rigour and a herd mentality have led to over-reliance on a handful of vendors, ubiquitous infrastructure models and identical security vulnerabilities that are quickly and easily exploited.

Budgets are adequate. Better, more affordable and secure technologies are mature and proven. As Mark Grindey, CEO, Zeus Cloud, argues, it is the broken tender process that is fundamentally undermining innovation and exposing the public sector to devastating security risk.

Broken Systems

There is no doubt that the UK’s public sector organisations are facing an ever-growing security threat. Like public bodies in every developed country, they are subject to state-sponsored attacks designed to undermine the delivery of essential services. And the cost of recovering from these cyberattacks is devastating, with councils spending millions to recover from ransomware attacks in recent years.

The ever-rising threat level is, however, just one part of the story. While public sector bodies are prime targets due to the level of sensitive data held, the impact of attacking critical infrastructure and the appeal of targeting a high-profile organisation, not every public body is enduring repeated downtime as a result of breaches.

Nor does a single hack automatically affect every part of the organisation, leading to a disruption of vital services for days, even weeks. So, what differentiates those organisations with a good cyber security track record, such as Bexley Council and Bedford Council, from the rest? And, critically, what is the best way to propagate best practice throughout the public sector to mitigate risk?

Broken Tender Process

The issue is not budget. The public sector may constantly claim a lack of funding but money is not the root cause of inadequate security or inconsistent service delivery. The problem is how that money is spent. Despite attempts to improve the rigour of public sector IT investment, the current tendering process is fuelling misdirected and excessive spend.

In theory, an open tender model should ensure that money is well spent. It should guarantee the service is delivered by the best provider. In reality, the vast majority of contracts are allocated to the same handful of large organisations. Which would be fine, if the services delivered were top quality, highly secure and fairly priced. They are not. The public sector is routinely charged three times as much as the private sector for equivalent IT deployments. Three times as much. 

In addition to this endemic overspending, the reliance on a small number of vendors radically increases the security threat due to the ubiquity of infrastructure models. When the majority of public sector organisations have relocated to the same public cloud hyperscaler and adopted identical security postures, it is inevitable that a breach at one organisation will be rapidly exploited and repeated in others. 

Inadequate Rigour

The current tender process completely lacks rigour. Given the continued security breaches, why are these vendors not being held to account? Why are they still being awarded new contracts? Indeed, why are they winning the business to rebuild and recover the systems damaged by a security breach that occurred on their watch, when other Managed Service Providers and cloud platforms can offer not only better pricing but a far better security track record? Something is clearly going very wrong in public sector procurement.

The public sector is complicit in this overspending: any vendor attempting to come in and charge a lower (fair) amount is automatically discounted from the tender process. Why? There are multiple reasons, not least that the public sector has been ‘trained’ by the IT industry to expect these inflated costs, but there is also a reliance on dedicated Procurement Officers who lack essential sector expertise. Why, for example, is every single system used by Leicester City Council located on the same public cloud platform? It should be impossible for a system breach to extend and expand across every single part of the organisation, yet by failing to understand basic security principles, the council set itself up for expensive failure.

The lack of expertise is a serious concern. Continued reliance on large IT vendors has resulted in many public sector organisations becoming dangerously under-skilled. Given the lack of internal knowledge, organisations often turn to incumbent vendors for information to support the tender process, leading inevitably to further price inflation. Furthermore, when a crisis occurs, reliance on a third party, rather than in-house expertise, leads to inevitable delays that exacerbate problems and result in additional cost to repair and restore systems.

Overdue Oversight

The situation is enormously frustrating for IT vendors with the expertise to deliver lower cost, secure systems. The misdirected spend has left public sector bodies woefully out of date. Not only are security postures frighteningly old-fashioned, but there are also unacceptable delays in vital service delivery innovations that would transform the citizen experience and provide operational cost savings.

Given the escalating pressures facing all public sector organisations, change is essential. In-house expertise must be rebuilt to ensure sector experts are involved in the procurement process, and pricing expectations must be immediately overhauled: avaricious IT vendors will continue to overcharge unless challenged. One option is to appoint an outsourced CTO with broad public and private sector expertise, an individual with the knowledge and experience to call out the endemic overcharging and sanity check the procurement process.

It is also important to move away from the herd mentality. Would, for example, an on-premise private cloud solution be a better option than a public cloud hyperscaler? What is the cost comparison of adding in-house security expertise rather than relying on a third party – factoring in, of course, the value of a fast response if a problem occurs? It is telling that the handful of local authorities with a good security track record have not adopted the same big vendor, public cloud approach but applied rigour to the procurement process to achieve a more secure and cost-effective approach. Others could and should learn from these organisations.
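To make that cost comparison concrete, here is a minimal sketch of a five-year cost-of-ownership calculation. It is illustrative only: the outsourcing fee, salaries, tooling costs and breach-recovery saving are all hypothetical placeholders, not figures from any real tender.

```python
# Illustrative only: every figure below is a hypothetical placeholder, not real
# tender or salary data. The point is the shape of the comparison, not the numbers.

def five_year_cost(upfront: float, annual: float, years: int = 5) -> float:
    """Simple undiscounted total cost of ownership over a number of years."""
    return upfront + annual * years

# Option A: outsourced security monitoring from an incumbent vendor (annual fee).
outsourced = five_year_cost(upfront=0, annual=250_000)

# Option B: two in-house security engineers plus tooling, with faster incident
# response assumed to reduce breach-recovery costs each year.
in_house_salaries = 2 * 65_000
tooling = 40_000
annual_recovery_saving = 50_000            # assumed value of faster response
in_house = five_year_cost(upfront=30_000, annual=in_house_salaries + tooling)
in_house -= annual_recovery_saving * 5

print(f"Outsourced, 5 years: £{outsourced:,.0f}")
print(f"In-house,   5 years: £{in_house:,.0f}")
```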

Conclusion

Good, effective IT systems underpin every aspect of public sector service delivery and, right now, the vast majority are not fit for purpose. It is, therefore, vital to highlight and celebrate the good performers – and challenge those vendors that continue to overcharge and underperform.

Sharing information between organisations, both to support strategic direction and day-to-day risk mitigation, is vital to propagate best practice. Critically, by pooling knowledge and expertise, the public sector can begin to regain control over what is, today, a broken model. While the public sector continues to flounder with inadequate security and a lack of knowledge, the IT vendors will continue to win. They need to be held to account, and that can only happen if public sector organisations come together to demand more.

Successfully Accelerating ERP-Cloud Adoption in the Public Sector
https://tbtech.co/news/successfully-accelerating-erp-cloud-adoption-in-the-public-sector/ – Wed, 06 Mar 2024

The demand for cloud-based ERP systems is gaining pace throughout the public sector. However, with support deadlines for existing on-premise solutions looming and widespread budget challenges, the pressure is on to find and implement the right solution as quickly as possible.

A new mindset is key. How cloud-ready and ready for digital transformation is the organisation? The benefits of deploying standard cloud-based products are closely aligned with the adoption of best practice processes and the shift away from custom development. Whilst many local authorities instinctively recognise the value of this approach, transitioning that sensibility into a cloud-product procurement process can be challenging. Rather than going to the market with an extensive list of user-led requirements, the emphasis changes to ensuring effective organisational change management to support best practice adoption and fast-track full utilisation of any purchased software.

So how can a local authority ensure it has not only the right cloud-based ERP technology but also the right implementation partner? Don Valentine, Commercial Director at Absoft, explains.

Digital Transformation Deadline

Local authorities throughout the UK are facing growing pressures to accelerate their digital transformation programmes. Budgetary demands are making it imperative to explore the power of technology to automate processes, improve efficiency, and enable effective service delivery at a time of endemic skills shortages. Local authorities also require far more insight into both the value of budgetary spending and their progress towards net zero targets. In addition, many organisations must upgrade or change their existing on-premise ERP solutions before the software falls out of support.

Cloud-based ERP solutions will automate and streamline processes, enabling local authorities to be more effective with the same or fewer staff – a key requirement given prevailing difficulties in recruiting skilled workers. A single source of accurate and up-to-date information, combined with intuitive analytics, will mean that the public sector organisation can model changes in business rates, council tax rates, headcount, or inflation into budget planning and forecasting. Trusted information will support confident projections for the next one, three, or even five years, enabling a local authority to demonstrate a longer-term outlook to the government and its local citizens.
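As a rough illustration of the kind of multi-year modelling described above, the sketch below compounds assumed growth rates over a five-year horizon. All baselines and rates are hypothetical placeholders, not real council figures, and a cloud ERP suite would of course do this with far richer data.

```python
# A minimal sketch of multi-year budget projection. All figures and growth
# rates are hypothetical placeholders chosen purely for illustration.

def project_budget(baseline: float, growth: float, years: int) -> list[float]:
    """Project an annual figure forward, compounding an assumed growth rate."""
    return [baseline * (1 + growth) ** y for y in range(1, years + 1)]

council_tax_income = 120_000_000      # current annual income (hypothetical)
service_costs = 115_000_000           # current annual costs (hypothetical)

income = project_budget(council_tax_income, growth=0.03, years=5)  # 3% rises
costs = project_budget(service_costs, growth=0.05, years=5)        # 5% inflation

for year, (inc, cost) in enumerate(zip(income, costs), start=1):
    print(f"Year {year}: surplus/deficit = £{inc - cost:,.0f}")
```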

The benefits are compelling, but any digital investment faces enormous public scrutiny. Local authorities need to both demonstrate value for money and, critically, achieve a seamless, effective implementation that delivers immediate benefits.

Different Implementation Approach

Over the past year, early adopters of cloud-based ERP solutions have led the way – and in the process created growing awareness not only about the tangible benefits that can be achieved, but also about the need to adopt the right approach to avoid expensive and high-profile mistakes. There is, as a result, a fast-growing understanding of the value of the ‘adopt not adapt’ model. Avoiding the cost, risk, and delay associated with custom development and adopting the best practice processes built into cloud ERP is now recognised as a key factor in achieving a successful implementation.

To maximise the value of the public cloud, including the twice-yearly updates that provide immediate access to continuous innovation while eradicating the burden of scheduling and managing upgrades, local authorities need to opt for a clean build and best practice processes. This approach, however, is a change from traditional public sector procurement practices, which have seen the creation of very detailed tenders with long lists of expected features and functions. Despite buying into the sensibility of “adopt not adapt”, some local authorities are still prone to issuing tenders based on extensive lists of user requirements, which are not always consistent and are sometimes at odds with each other. This approach is unlikely to lead the authority to either the right technology or the right implementation partner.

Rather than creating an exhaustive list of often irrelevant requirements, local authorities should be assessing the best practice models offered by ERP vendors. They should consider the implementation models and tools that have been developed to ease the process and support organisational change management. Tools such as a Cloud Mindset Assessment or a Readiness Assessment should be deployed in support of the procurement process. A Cloud Mindset Assessment highlights the diverse levels of digital maturity between individual departments and functions, a vital insight in supporting the operational change management required to make the implementation a success. The Readiness Assessment highlights the differences between current processes and best practices. Additionally, the procurement process needs to focus on finding an implementation partner with the experience and understanding to support a successful migration to the cloud and, very importantly, one that will be a cultural fit with the organisation implementing the ERP.

Embracing Self-Enablement

Local authorities should also recognise that the day-to-day implementation process is now inherently different. There is no waiting for months while a partner creates specifications, defines processes and builds custom developments. In the new world, a local authority is inherently involved in the process, working side by side with an implementation partner from the outset and using tools such as SAP’s Activate project methodology, which provides clear deliverables and instructions for both end-user organisations and partners throughout the six-phase project.

The shift to self-enablement is one of the most significant changes associated with cloud projects compared to on-premise. For example, with cloud-based deployments, there is an expectation that the user base will log into and play around with a ‘starter system’ very early in the project. Users can run processes, look at the potential home screen, and navigate the best practice models within a safe-place starter system.

As a result, when they join Fit to Standard workshops with an implementation partner, they already have a feel for the system and can contribute to the discussion meaningfully based on experience of using the system. These workshops will highlight any significant process change between current and future models, which can be addressed within the organisation’s change management strategy, further accelerating the successful migration process.

Conclusion

The benefits of digital transformation are clear. Not only are the cloud-based ERP systems incredibly functionally rich and intuitive, but the supporting tools are designed to ensure a local authority can get the best out of the system from day one – if the mindset and implementation model are correct. The shift in attitude needs to start even before the procurement process is initiated. Taking the time to get a feel for the new cloud-based ERP technologies and the new deployment models, including self-enablement, will help to clarify the requirements for a successful implementation. This understanding will also inform the skills needed internally to support the process and highlight any additional resources that will be required from a partner.

Gaining this level of understanding about best practices, cloud mindset, and any additional skills that will be required, such as data migration, will transform the relevance of the tender document and the quality of the procurement process. It will also provide vital clarity regarding the budget required. Local authorities that understand the cloud technology concept first, before looking at operational requirements, can be far more focused and insight-driven about what can be achieved with digital transformation, before even considering a specific technology or partner. And that is a vital step in achieving a successful cloud-based ERP deployment.

23andMe sparks rethink about safeguarding data
https://tbtech.co/news/23andme-sparks-rethink-about-safeguarding-data/ – Wed, 31 Jan 2024

Recently 23andMe, the popular DNA testing service, made a startling admission: hackers had gained unauthorised access to the personal data of 6.9 million users, specifically their ‘DNA Relatives’ data.

This high-profile breach made headlines globally and naturally highlights the need for stringent security measures when handling organisational data – especially the kind of sensitive genetic information that 23andMe is responsible for. Although the attacker appears to have used a tactic known as credential stuffing to access 23andMe’s customer accounts, the incident poses wider questions for organisations, IT managers and security experts about the measures used more generally to keep organisational and consumer data safe from threat actors. A key question for many organisations today is where and how they host their data – especially given that 23andMe’s data has reportedly been stored solely on cloud servers.
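For context, credential stuffing relies on replaying username and password pairs leaked from other services, so one common mitigation is to throttle repeated failed logins per account or per source IP. The sketch below shows that idea in minimal form; the thresholds and in-memory store are illustrative assumptions, and a production system would typically pair this with multi-factor authentication and breached-password checks.

```python
# A minimal sketch of one common credential-stuffing mitigation: throttling
# repeated failed logins per account (the same idea applies per source IP).
# Thresholds and the in-memory store are illustrative; production systems would
# typically use a shared store such as Redis.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300      # look at the last 5 minutes
MAX_FAILURES = 5          # block further attempts after 5 failures

_failures: dict[str, deque] = defaultdict(deque)

def record_failure(account: str) -> None:
    """Log a failed login attempt for this account."""
    _failures[account].append(time.time())

def is_locked(account: str) -> bool:
    """True if the account exceeded the failure threshold within the window."""
    attempts = _failures[account]
    cutoff = time.time() - WINDOW_SECONDS
    while attempts and attempts[0] < cutoff:
        attempts.popleft()                 # discard attempts outside the window
    return len(attempts) >= MAX_FAILURES
```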

Mark Grindey, CEO, Zeus Cloud, explains that one way organisations can mitigate similar risks is by implementing on-premises and hybrid cloud solutions. He covers how these technologies can play a vital role in safeguarding organisational data – such as 23andMe’s important genetic data – and shares insights about the key steps organisations can take to be more secure.

Achieving direct control of data

In 23andMe’s case, its compromised ‘DNA Relatives’ data holds immense value and is extremely sensitive, because it enables individuals to connect with potential relatives based on shared genetic information. This kind of valuable data often becomes a target for cybercriminals seeking to exploit it for various purposes, including identity theft, fraud, and other nefarious activities. Therefore, to protect this type of information, organisations need to implement robust security measures that ensure the confidentiality, integrity, and availability of the data.

On-premises solutions enable part of this protection to take place effectively and involve hosting data and applications within an organisation’s own physical infrastructure. This approach gives organisations direct control over their data and allows them to implement rigorous security protocols. For instance, by keeping genetic data on-site, an organisation like 23andMe is able to secure it behind multiple layers of firewalls and intrusion detection systems, reducing the risk of external breaches. Additionally, access to this data can be restricted to authorised personnel only, minimising the potential for internal data leaks.

Another approach worth considering for many organisations is to use hybrid cloud solutions, which combine the advantages of on-premises and cloud-based services. Organisations can use public or private clouds appropriately to store non-sensitive data while keeping sensitive information – like genetic information in 23andMe’s case – on-premises. This method gives organisations the flexibility to scale resources and accommodate fluctuating user demand, while still maintaining strict data control. When set up and configured correctly – using encrypted connections and robust authentication mechanisms – hybrid cloud solutions ensure secure data transmission between the on-premises and cloud environments.
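As a rough illustration of that hybrid split, the sketch below classifies each record and routes anything in a sensitive category to on-premises storage, sending the rest to the cloud. The category list and the storage back-ends are stand-in assumptions, not any vendor’s actual API.

```python
# A minimal sketch of the hybrid model described above: classify each record and
# keep sensitive data on-premises while non-sensitive data goes to the public
# cloud. The storage back-ends here are stand-in stubs, not a real API.

from dataclasses import dataclass

SENSITIVE_CATEGORIES = {"genetic", "health", "identity"}   # illustrative policy

@dataclass
class Record:
    record_id: str
    category: str
    payload: bytes

def store_on_premises(record: Record) -> str:
    return f"on-prem://vault/{record.record_id}"            # stub

def store_in_cloud(record: Record) -> str:
    return f"cloud://bucket/{record.record_id}"             # stub

def route(record: Record) -> str:
    """Keep sensitive categories on-premises; everything else may go to the cloud."""
    if record.category in SENSITIVE_CATEGORIES:
        return store_on_premises(record)
    return store_in_cloud(record)

print(route(Record("r1", "genetic", b"...")))     # -> on-prem://vault/r1
print(route(Record("r2", "marketing", b"...")))   # -> cloud://bucket/r2
```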

 

Steps Towards Preventing Data Breaches

While implementing on-premises and hybrid cloud solutions can significantly reduce the risk of data breaches and unauthorised access, there are several other crucial steps and techniques organisations can use to secure and protect data from breaches.

Obvious as it may seem to many in the industry, today it is vital to encrypt data both in storage and in transmission. This renders compromised data meaningless to unauthorised users, even if threat actors manage to gain access to it. Implementing multi-factor authentication is vital too: it strengthens access controls and adds an extra layer of security. Users trying to access data should be required to provide multiple forms of verification, such as passwords, biometrics, or smart cards, before reaching their genetic data. In 23andMe’s case, while it does offer this approach to its users, perhaps its use should be made mandatory given the recent breach.
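As a minimal illustration of encryption at rest, the sketch below uses the widely available Python cryptography library (installed with `pip install cryptography`). In practice the hard part is key management: keys would be held in an HSM or a managed key service rather than generated alongside the data as they are here.

```python
# A minimal sketch of encrypting data at rest with a well-known library
# (requires `pip install cryptography`). Key management is the hard part in
# practice - keys would live in an HSM or managed key service, not in code.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production: fetch from a key store
cipher = Fernet(key)

plaintext = b"sample genetic-data record"
token = cipher.encrypt(plaintext)    # safe to write to disk or object storage

assert cipher.decrypt(token) == plaintext
```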

 

Aside from this, it is recommended that organisations conduct frequent security audits to identify vulnerabilities and ensure compliance with industry standards and best practices. This involves testing the effectiveness of security protocols and promptly addressing any discrepancies.

Finally, no robust security framework is complete without equipping employees with proper training and awareness of their responsibilities for securing data. Regular security awareness programmes help staff understand their roles in protecting it.

Even though 23andMe claims that it exceeds industry data protection standards, has achieved three different ISO certifications to demonstrate the strength of its security programme, and routinely monitors and audits its systems, an incident like this, along with the PR and media attention it has gained, will undoubtedly have caused the company to re-evaluate all of its security parameters, including further training of its team, to ensure this doesn’t occur in future.

Conclusion

23andMe’s recent data breach serves as a wake-up call for organisations handling data, especially sensitive genetic information provided by consumers. This kind of incident will have naturally caused it to reconsider its security policies and approach towards securing organisational and customer data. Today, as other organisations consider their approach towards security and protecting data, many will review where and how their data is stored, managed and accessed. 

This is especially true of banks, telcos, insurance companies and many other kinds of firms. On-premises and hybrid cloud solutions provide powerful and effective options here too. They enable organisations to fortify their security measures and protect against potential data breaches. 

The combination of direct control over data, along with tools and tactics like encryption, multi-factor authentication, security audits, and employee training, creates a comprehensive defence against unauthorised access to organisational data. All of which the likes of 23andMe, along with many other organisations, will be considering and prioritising as they strive to adopt more robust security measures that ensure the privacy and integrity of organisational, and consumer, data.

Unlocking productivity and efficiency gains with data management
https://tbtech.co/news/unlocking-productivity-and-efficiency-gains-with-data-management/ – Tue, 04 Jul 2023

Enterprise data has been closely linked with hardware for numerous years, but an exciting transformation is underway. Data stewards in larger corporations have long been obliged to concentrate on acquiring, overseeing, and upholding hardware-based data storage infrastructure. Additionally, they were periodically required to purchase the newest equipment from vendors and to transfer their data to the most up-to-date gear to reap the benefits of the latest developments in efficiency and security.

Now, the era of the hardware business is gone, as modern data storage and protection capabilities, powered by the cloud, have rendered much of the once-crucial legacy storage technology obsolete. With advanced data services available through the cloud, organisations can forego investing in hardware and abandon infrastructure management in favour of data management. This change is widely recognised, with Gartner Research VP Julia Palmer, an expert on emerging infrastructure technologies and strategies, highlighting the shift in an October 2022 report.

Once your data is no longer tied to a specific facility, location, or hardware, new opportunities arise for leveraging it within your organisation. However, to do so, you must first shift your strategic perspectives on data management and delivery, focusing on three key requirements: utilising the cloud for more flexibility and scalability, making data delivery a priority, and securing data. Let’s explore these in more detail:

1. The time is now to transfer data to the cloud

The advantages of shifting your data to the cloud have been apparent for quite some time, as the economic benefits and infinite scalability of object storage have solidified cloud services as the infrastructure of the future. The majority of data storage is now done in the cloud, with over 50% of company data held there, and the pandemic has only increased the urgency to adopt cloud services.

Utilising cloud services is no longer simply about cutting long-term expenses, minimising physical infrastructure, and enhancing demand scalability; it also enables more agility for your business and transforms the possibilities of data usage.

2. Prioritising data delivery is key to productivity and efficiency

The shift towards modern infrastructure has been ongoing for a while, but the emergence of remote and hybrid work has accelerated this change. Previously, users were stationed at desks near the hardware that stored and protected their data, but now they are spread out everywhere, working from home offices, cafes, client offices, co-working spaces, and more. Users don’t stay put, either, shifting from location to location, and they expect to be able to quickly and easily access their data regardless of where they happen to be working.

This transformation in how we work means that applications must run close to workers’ data. Regardless of industry, where a worker is located, or whether they are using an off-the-shelf or homegrown application, apps must be close to data to deliver the expected level of performance, efficiency and productivity. Traditional storage hardware and wide area networks are insufficient for this task because the software needs to reach across the wire to access that data. This is where the cloud has become a crucial delivery vehicle for data: cloud computing allows for increased flexibility and the ability to deliver data to users and applications anywhere in the world.

3. Never compromise on data protection

Last but not least, data protection is crucial, and data delivery should not come at its expense. Even before the shift to hybrid and remote working, which accelerated during the pandemic, ransomware was a growing threat. The UK government released new estimates in April 2023 suggesting there were around 2.39 million instances of cyber crime across all businesses, with 11% of organisations experiencing cyber crime in the last 12 months. And now there are even more opportunities for malicious hackers due to the expanded attack surface, because more people are accessing data and systems from more locations. It is therefore imperative to focus on protecting data while considering how to support the flexibility of hybrid and remote work models.

Ignoring one and focusing on the other is not an option. For example, keeping employees in a few major locations for data protection will restrict productivity and harm your talent pool. Conversely, distributing data everywhere without a reliable ransomware recovery plan will put your business at risk of extended downtime or financial exposure. It’s become clear that a comprehensive approach to data protection is critical for businesses to ensure both business efficiency and security globally.

Reaping the benefits from a shift to data management

Even with the underlying risk of ransomware, this transition from managing infrastructure to managing data aligns perfectly with the new flexible way of working. Users can be in the office one day and at home the next, collaborating with colleagues, partners and others potentially all over the world. Data centres no longer need to be the centre of data, as data itself is now the centre.

A new approach to enterprise data is now a requirement for businesses, with shifting to the cloud, prioritising data delivery, and honing in on data protection key to successfully transitioning from managing infrastructure to managing data. Embracing this new methodology could also spark larger changes with exciting implications for enterprises as they choose what to do with this newly accessible data. For example, feeding it into new machine learning and artificial intelligence workloads to further drive innovation, workplace productivity and efficiency. 

The Benefits of Data Centre Redundancy in Digital Age
https://tbtech.co/news/the-benefits-of-data-centre-redundancy-in-digital-age-3/ – Wed, 26 Apr 2023

We don’t like to say the ‘D’ word out loud, but power outages are the main cause of dreaded ‘downtime’ in data centres, usually triggered by overheating or equipment failure.

Wherever the outage originates from, the end result is always the same – data loss, damaged files, and destroyed equipment, meaning significant, sometimes catastrophic, losses of money and thousands of unhappy users.

Last summer, you may have noticed reports of data centres struggling during heatwaves and suffering painfully disruptive downtime. The Uptime Institute found that downtime was regularly costing data centres over £1 million. (https://dcnnmagazine.com/data-centres/data-centre-outages-crestchic-loadbanks/)

So, how can increasingly expensive and damaging downtime be prevented?

Data centres now build redundancy into their infrastructure, allowing critical systems to continue running in the event of an outage.

What is Data Centre Redundancy?

In short, data centre redundancy involves the duplication of critical components of a system in order to improve reliability – a bit like a backup. Data centre redundancy tends to focus on how much spare power can be used as a backup during a power outage.

How can data centres plan for a sufficient amount of redundancy in the event of unforeseen power outages?

Large businesses keep their servers in Tier 3 and Tier 4 data centres, which offer high performance and uptime guarantees compared with Tiers 1 and 2. However, each tier offers differing levels of redundancy systems. Tier 3 usually offers N+1, while Tier 4 will provide 2N or 2N+1.

How Many N’s Do Data Centres Require?

In simple terms, N is the amount of equipment needed to keep a data centre running at full load; the terms that follow describe how much spare capacity is added on top of that.

N+1 is also called parallel redundancy and ensures that an uninterruptible power supply (UPS) system is always available. It’s like having one extra backup server for every ten so that in the event a primary element fails and requires removal for maintenance, an additional component starts running. N+1 backup solutions operate for a minimum of 72 hours in the event of local or region-wide outages. Yet it is not a complete fail-safe, as it runs on one common circuit, rather than its own separate feed.

How about 2N? Also known as N+N, this offers a fully redundant, wholly mirrored setup with two independent systems, so that in the event a primary component fails, an identical standby replica can stand in to continue operations.

And to really cover all bases, 2N+1 offers double cover plus one extra piece of equipment, so in the event of an extended outage, there is an extra backup component to cover a failure when the secondary system is running.
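To see how these schemes translate into equipment counts, the sketch below sizes UPS modules for each redundancy level. The load and module ratings are hypothetical placeholders chosen purely for illustration.

```python
# A minimal sketch of how the redundancy levels above translate into equipment
# counts, using UPS modules as the example. Load and module ratings are
# hypothetical placeholders.

import math

def units_required(it_load_kw: float, module_kw: float, scheme: str) -> int:
    """Number of UPS modules for a given redundancy scheme."""
    n = math.ceil(it_load_kw / module_kw)        # N: modules needed for full load
    if scheme == "N":
        return n
    if scheme == "N+1":
        return n + 1
    if scheme == "2N":
        return 2 * n
    if scheme == "2N+1":
        return 2 * n + 1
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("N", "N+1", "2N", "2N+1"):
    print(scheme, units_required(it_load_kw=800, module_kw=250, scheme=scheme))
# -> 4, 5, 8 and 9 modules respectively for an 800 kW load and 250 kW modules
```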

Is Data Centre Redundancy the Only Solution?

Having a redundancy system in place is crucial for data centres, but there are other ways to prevent outages.

As we mentioned earlier, outages can be triggered by warmer weather. Many data centres look into alternative cooling methods, whether that be air cooling, liquid cooling or even migrating to cooler countries. 

Having a handle on cooling can not only prevent downtime but also save huge amounts of money, considering the vast amounts of heat data centres produce.

Improper data centre infrastructure management (DCIM) and unreliable processes can lead to overheating, a leading cause of outages, wasting energy and money. When data centres don’t have accurate data about their systems and where obsolete equipment is located, they can even start overcooling.

Drops or surges in power can shut down or damage servers.

UPS (Uninterruptible Power Supply) systems should be in place, but there is still the possibility of disruption to cooling systems which could lead to overheating.

Electrical usage sensors can track power consumption by the rack, enabling replacement of obsolete servers with more efficient equipment.

Outages can also occur through human error and inaccurate maintenance and management of servers. Data centres can easily prevent such occurrences with DCIM software that streamlines processes and even predicts when outages are about to happen, allowing faster reaction and prevention.
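As a simplified illustration of the kind of per-rack check a DCIM platform automates, the sketch below flags racks whose power draw or inlet temperature exceeds a threshold. The limits and readings are illustrative assumptions; real DCIM tools layer trending, prediction and workflow on top of checks like this.

```python
# A minimal sketch of a threshold-based check on per-rack sensor readings.
# Thresholds and readings are illustrative; real DCIM platforms add trending,
# prediction and automated workflows on top of this.

RACK_POWER_LIMIT_KW = 8.0
RACK_INLET_TEMP_LIMIT_C = 27.0     # ASHRAE-style recommended upper bound

readings = [
    {"rack": "A01", "power_kw": 6.2, "inlet_temp_c": 24.1},
    {"rack": "A02", "power_kw": 8.4, "inlet_temp_c": 25.0},   # over power budget
    {"rack": "B07", "power_kw": 5.1, "inlet_temp_c": 29.3},   # running hot
]

def check(reading: dict) -> list[str]:
    """Return any alert messages for a single rack reading."""
    alerts = []
    if reading["power_kw"] > RACK_POWER_LIMIT_KW:
        alerts.append(f"{reading['rack']}: power {reading['power_kw']} kW over budget")
    if reading["inlet_temp_c"] > RACK_INLET_TEMP_LIMIT_C:
        alerts.append(f"{reading['rack']}: inlet {reading['inlet_temp_c']} C too hot")
    return alerts

for r in readings:
    for alert in check(r):
        print("ALERT:", alert)
```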

What else can DCIM do to prevent downtime? And can it work alongside data centre redundancy?

The Ultimate Team – DCIM and Data Centre Redundancy

On its own, data centre redundancy is a data centre saver. Likewise, used in isolation, DCIM offers innovative methods of measuring, monitoring and managing data centres, producing optimised environments and increased uptime.

Put the two together, and you have the benefits of a backup bolstered by the advantages of smart prevention software.

Other than cooling, UPS and streamlined processes, how does DCIM help ensure uptime? It offers complete visibility of consumption data, whilst trending historic data makes planning future energy consumption straightforward.
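As a minimal illustration of that trending idea, the sketch below fits a straight line to twelve months of hypothetical energy readings and extrapolates the next six. A DCIM platform would use richer models, but the principle is the same.

```python
# A minimal sketch of trending historic consumption data to plan future energy
# use: fit a straight line to monthly kWh readings and extrapolate. The figures
# are illustrative placeholders.

import numpy as np

months = np.arange(1, 13)                                   # last 12 months
kwh = np.array([310, 315, 322, 330, 333, 341, 348, 355, 360, 366, 371, 380],
               dtype=float) * 1000                          # monthly kWh

slope, intercept = np.polyfit(months, kwh, 1)               # linear trend

future_months = np.arange(13, 19)                           # next 6 months
forecast = slope * future_months + intercept
for m, f in zip(future_months, forecast):
    print(f"month {m}: ~{f:,.0f} kWh")
```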

It’s time to stay ahead of the risk of downtime with a fully efficient and optimised data centre. Implement Assetspire’s next-gen DCIM to monitor your assets and detect any potential issues before they become big problems, minimising disruptive downtime and data loss and saving money.

Gain real-time insight into capacity, current and past utilisation, and management of energy sources. Assetspire’s smart DCIM software provides a full and accurate overview of all assets, so you can see exactly where energy-wasting obsolete and outdated equipment is located, offering the insight to be able to repurpose older assets or replace them and prevent downtime.

(https://www.assetspire.co.uk/solutions/datacentre-infrastructure-management/)

The Risk of IT Business as Usual
https://tbtech.co/news/the-risk-of-it-business-as-usual/ – Mon, 27 Mar 2023

IT teams within mid-sized organisations are over-stretched. Resources are scarce, with sometimes skeleton teams responsible for all aspects of IT delivery across large numbers of users. With up to 90% of the team’s time being spent ‘keeping the lights on’, there is minimal scope for the strategic thinking and infrastructure optimisation that business leaders increasingly demand. Yet without IT, businesses cannot function. And in many cases, there will be compliance or regulatory consequences in the event of a data breach.

With cyber security threats rising daily, businesses cannot afford to focus only on Business as Usual (BAU). But without in-house expertise in security, backup and recovery, or the time to keep existing skills and knowledge at the cutting edge, IT teams are in a high-risk catch-22. Steve Hollingsworth, Director, Covenco, and Gurdip Sohal, Sales Director, Covenco, explain why a trusted IT partner that adds dedicated expertise in key areas such as infrastructure, backup and security to the existing IT team is now a vital component of supporting and safeguarding the business.

Unattainable Objectives

Prioritising IT activity and investment is incredibly challenging. While IT teams are being pulled from pillar to post simply to maintain essential services, there is an urgent need to make critical upgrades to both infrastructure and strategy. The challenges are those IT teams will recognise well: cyber security threats continue to increase, creating new risks that cannot be ignored. Business goals – and the reliance on IT – are evolving, demanding more resilience, higher availability and a robust data recovery strategy. Plus, of course, any changes must be achieved with sustainability in mind: a recent Gartner survey revealed that 87% of business leaders expect to increase their investment in sustainability over the next two years to support organisation-wide Environmental, Social and Governance (ESG) goals.

But how can IT Operations meet these essential goals while also responding to network glitches, managing databases and, of course, dealing with the additional demands created by Working from Home (WFH)? Especially when skills and resources are so thin on the ground. While there are some indications that the continued shortage of IT staff may abate by the end of 2023, that doesn’t help any business today. 

Right now, there is simply no time to upskill or reskill existing staff. Indeed, many companies are struggling to keep hold of valuable individuals who are being tempted elsewhere by ever-rising salaries. Yet the business risk created by understaffed and overstretched IT teams is very significant: the most recent fine imposed by the Information Commissioner’s Office (ICO), for example, warns companies against complacency and against failing to take the essential steps of upgrading software and training staff.

Differing Demands

With four out of five CEOs increasing digital technology investments to counter current economic pressures – including inflation, scarce talent and supply constraints – according to Gartner, something has to give if resources remain so stretched. And most IT people will point immediately to the risk of a cyber security breach. Few companies now expect to avoid a data breach: according to IBM’s 2022 Cost of a Data Breach report, for 83% of companies it is not a question of if a data breach will happen, but when – and they expect a breach to occur more than once.

The research confirms that faster is always better when detecting, responding to and recovering from threats. The quicker the resolution, the lower the business cost. But how many IT teams have the resources on tap to feel confident in the latest security postures or create relevant data backup and recovery strategies?

These issues place different demands on IT teams. Most organisations will need 24/7 monitoring against the threat of a cyber-attack; establishing and then maintaining data backup and recovery policies, by contrast, is not a full-time requirement – most companies need only an annual or bi-annual review and upgrade. This is where a trusted partner able to deliver an end-to-end service covering infrastructure, backup, managed services and security – one that can flex up and down as the business needs it – is becoming a core resource within the IT Operations team.

Extended Expertise Resource

A partner with dedicated technical expertise can augment existing skills in such specialist areas. These are individuals who spend every day assessing the latest technologies and solutions, who understand business needs and know how to achieve a best practice deployment quickly and, crucially, right first time.

Taking the time to understand the entire IT environment and assessing the backup and recovery needs, for example, is something that an expert can confidently and quickly achieve without the Business-as-Usual distractions a member of the IT team faces. What is the company’s Recovery Point Objective (RPO) or Recovery Time Objective (RTO)? How long will it take to get back up and running in the event of an attack or server failure? What are the priority systems? How is the business going to deal with a cyber-attack? 
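As a rough sketch of the kind of check an expert would put in place – system names, targets and timings below are entirely hypothetical – backup currency and estimated restore times can be compared against agreed RPO and RTO targets:

```python
# Hypothetical sketch: sanity-check a backup catalogue against RPO/RTO targets.
# System names, objectives and timings are invented for illustration.
from datetime import datetime, timedelta, timezone

targets = {  # per-system recovery objectives
    "finance-db": {"rpo": timedelta(hours=1),  "rto": timedelta(minutes=30)},
    "file-share": {"rpo": timedelta(hours=24), "rto": timedelta(hours=4)},
}
last_backup = {  # time of the most recent successful backup
    "finance-db": datetime.now(timezone.utc) - timedelta(minutes=45),
    "file-share": datetime.now(timezone.utc) - timedelta(hours=30),
}
estimated_restore = {  # measured or estimated restore durations
    "finance-db": timedelta(minutes=20),
    "file-share": timedelta(hours=5),
}

for system, objective in targets.items():
    rpo_ok = datetime.now(timezone.utc) - last_backup[system] <= objective["rpo"]
    rto_ok = estimated_restore[system] <= objective["rto"]
    print(f"{system}: RPO {'met' if rpo_ok else 'AT RISK'}, RTO {'met' if rto_ok else 'AT RISK'}")
```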

By focusing exclusively on where risks may lie and then implementing the right solutions quickly and effectively, a partner can de-risk the operation. From a VEEAM backup vault in the cloud or instant database copies using IBM FlashSystem, to a disaster recovery plan that includes relocation, or high availability with the goal of achieving local recovery within minutes, the entire process can be delivered while the IT team concentrates on its existing, demanding roles.

Conclusion

Whether a company needs to expand its infrastructure to support the CEO’s digital agenda or radically improve cyber security, or both, very few IT teams have either the spare capacity or dedicated expertise to deliver. Focusing on Business as Usual is, of course, an imperative – but unfortunately just not enough in a constantly changing technology landscape. 

Partnering with a trusted provider capable of delivering a flexible, end-to-end service – with dedicated skills available as and when required to supplement and support the overstretched IT team – is therefore key not only to keeping the lights on, but also to ensuring the business’ current and future needs are effectively addressed.

Top DFS-R issues and solutions for enterprises https://tbtech.co/news/top-dfs-r-issues-and-solutions-for-enterprises/?utm_source=rss&utm_medium=rss&utm_campaign=top-dfs-r-issues-and-solutions-for-enterprises https://tbtech.co/news/top-dfs-r-issues-and-solutions-for-enterprises/#respond Tue, 28 Feb 2023 13:02:00 +0000 http://52.56.93.237?p=254220 DFS-R (Distributed File System Replication) is a technology that allows organisations to replicate files and folders between multiple servers. It is a free utility included in your standard Windows Server operating system, and it is commonly used alongside DFS Namespaces (another Microsoft utility that creates a virtual file system of folder shares). The DFS-R service provides basic replication functionality on your network. It can help ensure that data is available and accessible to users across the organisation, even in the event of server failures or other issues. However, DFS-R can prove quite costly in ongoing management time, and its reliability has historically been questionable. In this article, we will explore some of the top DFS-R issues and solutions for enterprises.

Slow replication speed

One of the most common issues with DFS-R is slow replication speed. DFS-R throttles bandwidth usage on a per-connection basis with a fixed limit. This means that DFS-R does not perform “bandwidth sensing”: if network conditions change, the throttle is not adapted to suit them.

To resolve this issue, you can consider increasing the bandwidth of your network, upgrading your hardware, or reducing the amount of data being replicated. A Quality of Service (QoS) style throttle also helps avoid slowing systems down for your users. Better still, advanced, dynamic throttling is best suited to enterprise-sized estates, because bandwidth usage is based on a percentage of the bandwidth currently available. For instance, you could allow 50% of the connection: if the connection is 10Mbps and idle, approximately 5Mbps would be used. If another process consumed 5Mbps of that connection, the throttle would reduce to approximately 2.5Mbps (50% of the free 5Mbps). This allows your file synchronisation system to use more bandwidth when it is available and less when other processes need it.
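The arithmetic behind that dynamic throttle is simple; the sketch below is a hypothetical illustration of the calculation described above, not any vendor's implementation:

```python
# Minimal sketch of the dynamic-throttle arithmetic described above: the sync
# job is allowed a fixed percentage of whatever bandwidth is currently free.
# Numbers mirror the article's example; the function itself is hypothetical.

def dynamic_throttle(link_mbps: float, other_traffic_mbps: float, share: float = 0.5) -> float:
    """Return the bandwidth (Mbps) the replication job may use right now."""
    free = max(link_mbps - other_traffic_mbps, 0.0)
    return free * share

print(dynamic_throttle(10, 0))  # idle 10 Mbps link -> 5.0 Mbps for replication
print(dynamic_throttle(10, 5))  # 5 Mbps in use    -> 2.5 Mbps for replication
```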

Inconsistent replication

DFS-R may sometimes fail to replicate files and folders consistently across all servers. This can be due to network latency or conflicts between files. To address this issue, you can try extending the replication schedule so replication can run for more of the day, checking for conflicts between files, or running a DFS-R diagnostic report to identify any issues. You can also implement a more robust file locking mechanism to prevent simultaneous modifications, or rely on DFS-R’s conflict resolution.

File conflicts and deletions

DFS-R may sometimes encounter file conflicts or deletions, which can cause data loss or corruption. This can be caused by synchronisation errors, or by users modifying files simultaneously. To reduce the risk, you can rely on DFS-R’s conflict resolution, or implement file locking mechanisms to prevent simultaneous modifications. However, Microsoft actually recommends not using DFS-R in an environment where multiple users could update or modify the same files simultaneously on different servers.
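For illustration, the sketch below shows the last-writer-wins approach commonly used for conflict handling (DFS-R's own behaviour is broadly of this kind): the newest copy wins and the losing copy is set aside rather than silently discarded. Server names, timestamps and the "conflict store" are invented:

```python
# Minimal sketch of last-writer-wins conflict handling: the most recently
# modified copy wins; the losing copy is parked rather than destroyed.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class FileVersion:
    server: str
    modified: datetime


def resolve_conflict(a: FileVersion, b: FileVersion):
    """Return (winner, loser) by last-writer-wins."""
    return (a, b) if a.modified >= b.modified else (b, a)


site_a = FileVersion("LON-FS01", datetime(2023, 2, 1, 9, 15))
site_b = FileVersion("MAN-FS02", datetime(2023, 2, 1, 9, 42))

winner, loser = resolve_conflict(site_a, site_b)
print(f"Keep copy from {winner.server}; move {loser.server}'s copy to the conflict store")
```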

For environments with multiple users scattered across different locations and servers, engineers need a solution – such as Software Pursuits’ SureSync – that minimises the “multiple updates” issue. One method may not suit all needs for large enterprises, so it is best to look for solutions that offer collaborative file sharing between offices, with file locking and a combination of one-way and multi-way rule methods.

Moreover, enterprise services such as SureSync make it easier to recover files if something is accidentally deleted. DFS-R – like any sync tool – is a faithful servant, copying new files, changes and deletions between systems to maintain a Distributed File System. What happens if a person or application goes rogue and deletes multiple files? The sync tool will faithfully delete them from the other locations too, and you will have to go to the backups and restore. With services such as SureSync there is a safety net: the software can store the deleted file(s) in a Backup Path, which itself prunes every X number of days. There is no need to find backup tapes or restore from backup tools – with SureSync you simply drag the required file(s) back from the backup folder in Windows Explorer and it copies them back to the other servers. It is quick and simple.
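The safety-net idea can be sketched in a few lines. The example below is purely illustrative – it is not SureSync's implementation – and the backup location and retention window are assumptions:

```python
# Hypothetical sketch of a deletion "safety net": instead of deleting a file
# outright during synchronisation, move it into a backup folder and prune
# anything older than a retention window.
import shutil
import time
from pathlib import Path

BACKUP_DIR = Path(r"D:\SyncBackup")  # assumed location of the safety net
RETENTION_DAYS = 30                  # assumed retention window


def soft_delete(path: Path) -> None:
    """Move a file into the backup folder instead of deleting it."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    shutil.move(str(path), BACKUP_DIR / path.name)


def prune_backups() -> None:
    """Remove backed-up files older than the retention window."""
    cutoff = time.time() - RETENTION_DAYS * 24 * 3600
    for item in BACKUP_DIR.glob("*"):
        if item.is_file() and item.stat().st_mtime < cutoff:
            item.unlink()
```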

Authentication issues

DFS-R may sometimes encounter authentication issues, which can prevent replication from occurring. This can be caused by incorrect credentials, expired passwords, or incorrect permissions. To resolve this issue, ensure that the correct credentials are in use, verify permissions, and check for any expired passwords. You can also implement a more robust authentication system that requires multi-factor authentication – such as NEOWAVE’s FIDO-certified security keys – or other security measures.
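As a hypothetical illustration of catching one common cause – expired service-account passwords – before replication silently stops, a simple check against an assumed password policy might look like this:

```python
# Hypothetical sketch: flag replication service accounts whose passwords are
# near expiry. Account names, dates and the policy are invented.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)     # assumed password policy
WARN_BEFORE = timedelta(days=14)  # warning window before expiry

accounts = {  # last password-change dates
    "svc-dfsr-siteA": datetime(2023, 1, 5, tzinfo=timezone.utc),
    "svc-dfsr-siteB": datetime(2022, 10, 20, tzinfo=timezone.utc),
}

now = datetime.now(timezone.utc)
for name, changed in accounts.items():
    expires = changed + MAX_AGE
    if expires <= now:
        print(f"{name}: password EXPIRED on {expires:%Y-%m-%d}")
    elif expires - now <= WARN_BEFORE:
        print(f"{name}: password expires soon ({expires:%Y-%m-%d})")
    else:
        print(f"{name}: ok until {expires:%Y-%m-%d}")
```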

DFS-R can work well for some organisations, given careful planning and ongoing management to ensure it functions correctly. However, for most large enterprises it is not enough. DFS-R provides limited reporting options, limited ability to synchronise encrypted files, and no ability to synchronise files stored on FAT or ReFS volumes, making it challenging to operate efficiently in today’s hybrid workplace. IT staff must adapt systems for users working from different locations while also managing varying bandwidth speeds at different times. The common issues discussed in this article highlight the need for IT staff to evaluate their file synchronisation and replication systems and determine whether alternative solutions are required to meet their organisation’s needs.

By addressing these common issues and implementing the appropriate solutions, you can help ensure that your DFS-R implementation runs smoothly and reliably. Understanding the risks associated with synchronisation also allows you to mitigate them and protect your data. Following best practices and staying up to date with the latest developments in DFS-R technology will help your organisation take full advantage of the benefits of Distributed File System Replication and DFS Namespaces.
