Managing Endpoints : Enterprise Security Mechanisms and Maintenance Schedules Should Not Work in Silos

As technology in today’s digital age continues to develop and become more efficient, the ability of cybercriminals to exploit new advancements grows with it. Enterprises face the increasingly complex task of protecting themselves from malicious activity by securing network infrastructure and safeguarding every endpoint. Trends in threats and their corresponding mitigation techniques come and go, and information technology staff are responsible for keeping systems up to date across the entire enterprise.

This can be a daunting feat for any one team. However, when separate IT divisions within the same company work in isolation to achieve total security, the result can be gaps that inadvertently expose the network to risk. If the branches of IT – and specifically the implementations within the security department – work in segregated silos, they create an unnecessary problem for the business.

Processes Should Not Be Isolated

A silo is a part of a network where data is stored separately from the rest of an enterprise’s information systems. These isolated storage locations often contain customer information, product data, order histories and other sensitive records. The term can also refer to a lack of unification among the various systems that are meant to achieve a single goal.

The concept is often viewed in a negative light by IT professionals because of the inherent limitations it presents for staff, and in regard to

Enabling Self-healing in Endpoints : Changing the Face of Endpoint Management and Security

The number of cyberattacks leveled against businesses and ordinary people is rising every year. The Identity Theft Resource Center found that data breaches increased 40 percent over the last year. Remaining vigilant about the ever-present danger of a hack is simply a part of life these days, but that doesn’t mean the struggle is hopeless. Enterprises are adopting new approaches that combine technology with careful planning to improve business continuity. Enabling self-healing endpoints is one concept that organizations are increasingly embracing.

What Does “Self-Healing” Mean?

High system availability has become a necessity for organizations, as many fast-paced business functions depend on a handful of critical endpoints. Maintaining consistent system configurations across thousands of endpoints with diverse setups can become a nightmare for IT admins, given the amount of time and manual labour involved. By enabling self-healing endpoints, IT admins can ensure preferred system configurations remain untampered with and always available. This eliminates the problem of having to check and fix every single endpoint’s control settings.

The idea behind self-healing is to set up resilient endpoints that can revert to a known state – i.e. with approved, licensed software and security protocols in place – instantly, through automation or with minimal manual involvement. With advanced persistent threats (APTs), a breach can lie dormant and undetected for a long time. Self-healing endpoints also avoid the problem of configuration drift, where unrecorded, unwanted changes accumulate until systems end up with a modified setup that can hamper business operations.
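As a rough illustration of the “revert to a known state” idea, the C sketch below compares a live configuration file against an approved baseline and restores the baseline when drift is detected. The file names and the single-file approach are hypothetical assumptions for the example; commercial self-healing tools generally protect entire system images rather than individual files.

/*
 * Minimal sketch of self-healing as drift detection plus restore.
 * Paths are hypothetical; real products work at the system-image level.
 */
#include <stdio.h>

/* Return 1 if the live file differs from the baseline (or is missing). */
static int has_drifted(const char *live_path, const char *baseline_path)
{
    FILE *live = fopen(live_path, "rb");
    FILE *base = fopen(baseline_path, "rb");
    int drift = 0;

    if (!live || !base) {
        drift = 1;                      /* a missing file counts as drift */
    } else {
        int a, b;
        do {
            a = fgetc(live);
            b = fgetc(base);
            if (a != b) { drift = 1; break; }
        } while (a != EOF && b != EOF);
    }
    if (live) fclose(live);
    if (base) fclose(base);
    return drift;
}

/* Overwrite the live file with the approved baseline copy. */
static int restore_baseline(const char *live_path, const char *baseline_path)
{
    FILE *base = fopen(baseline_path, "rb");
    FILE *live = fopen(live_path, "wb");
    if (!base || !live) {
        if (base) fclose(base);
        if (live) fclose(live);
        return -1;
    }

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, base)) > 0)
        fwrite(buf, 1, n, live);

    fclose(base);
    fclose(live);
    return 0;
}

int main(void)
{
    const char *live = "endpoint.conf";              /* hypothetical path */
    const char *baseline = "endpoint.conf.approved"; /* hypothetical path */

    if (has_drifted(live, baseline)) {
        puts("Configuration drift detected - restoring approved baseline.");
        if (restore_baseline(live, baseline) != 0)
            fputs("Restore failed.\n", stderr);
    } else {
        puts("Endpoint configuration matches the approved baseline.");
    }
    return 0;
}

In practice this kind of check would run automatically on a schedule or at every reboot, which is what removes the manual effort described above.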

Continue Reading…

Tech Roundup : December in Review

2017 was quite a year, and December was an eventful month in the digital world. The profitability of the technology industry helped Wall Street offset losses in the energy sector, reinforcing the notion that tech is one of the strongest and most reliable fields to invest in. The National Science Foundation, a government agency dedicated to supporting scientific research, awarded millions of dollars to help boost cybersecurity education.

On top of this, Colonel Andrew O. Hall, director of the Army Cyber Institute, discussed the importance of cooperation between the private sector and the military. Finally, the Internet Governance Forum raised the ever-important topic of gender inequality within the tech industry.

While this certainly isn’t everything that happened in tech during December, these stories show what needs to be improved upon in the new year. Tech may be highly valued financially, but the shortage of cybersecurity professionals is alarming. What’s more, the only way to keep growing is to use every resource available, both by closing the gender gap and by encouraging teamwork between the private and public sectors.

Tech Helped Prop Up Wall Street In December

Technology has always been a valued sector on Wall Street, but a recent surge on the stock market showed just how much of a tentpole it really is. According to Reuters, traders saw a noticeable loss in energy stocks in late December. However, there isn’t much to worry about, as gains within technology

Meltdown and Spectre Vulnerabilities – Are Faronics Solutions Affected?

Everybody is by now aware of the vulnerabilities baked into many modern processors. These vulnerabilities, called Meltdown and Spectre, can potentially allow malicious software to extract data from the kernel memory of an affected system. This is memory that would normally be used to store information securely so that other applications cannot access it. The vulnerabilities potentially affect every processor released in well over a decade that supports specific optimizations intended to speed up its operation.

So how does this work?

These exploits work by forcing the processor to speculatively (out of order) run instructions that access areas of memory that would not normally be accessible, so that the results of those instructions wind up being saved in the CPU’s cache. When the CPU catches up to the instructions that were run out of order and realizes they should not have had access to that data, it aborts those instructions, but the contents of the CPU cache may already have changed.

Attackers can then leverage other exploits to extract this data from the CPU cache and view information contained in areas of the system’s memory that they would not normally have access to – possibly even reaching outside of a virtual machine and into the host, or into other virtual machines on the same host.
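To make the mechanism above more concrete, here is a hedged C sketch of the widely published Spectre variant 1 (bounds check bypass) access pattern. It is not a working exploit: the branch-predictor training and the cache-timing measurement needed to actually recover data are omitted, and the array names and sizes are illustrative assumptions rather than anything specific to a particular product.

/*
 * Illustrative sketch of the Spectre variant 1 (bounds check bypass)
 * pattern. Not a working exploit: the predictor training and the
 * cache-timing (e.g. flush+reload) steps are intentionally omitted.
 */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];                 /* array the bounds check protects */
size_t  array1_size = 16;
uint8_t array2[256 * 4096];         /* probe array: one cache line per
                                       possible byte value */
volatile uint8_t sink;

void victim_function(size_t x)
{
    /* An attacker first calls this with in-bounds values of x to train the
     * branch predictor, then with an out-of-bounds x that points at secret
     * data. The CPU may speculatively execute the body before the bounds
     * check resolves; the dependent load touches a cache line of array2
     * chosen by the secret byte array1[x]. The speculative result is
     * discarded, but the cache footprint remains and can be measured. */
    if (x < array1_size) {
        sink = array2[array1[x] * 4096];
    }
}

int main(void)
{
    victim_function(0);             /* harmless in-bounds call */
    return 0;
}

The measurable side effect lives in the cache, which is why the paragraph above describes attackers needing a second step to read the leaked information back out.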

Who’s vulnerable?

In theory, any processor that supports speculative execution is going to be vulnerable in

Business Continuity Planning : How to Approach Enterprise BCP in 2018

It can be hard for company owners to think about what would happen if their company’s network failed. So much of the modern business world lives in a digital environment, and losing it can lead to devastating losses. According to Gartner, a single hour of network downtime costs the average organization $300,000. On top of this, an extended downtime event can cause irreparable damage to your company’s image. Business continuity planning has become more critical than ever.

That prospect can feel like too much for some administrators, but thinking about – and planning for – the worst is the best way to stay safe in the event of a major catastrophe. As we head into the new year, let’s take some time to discuss the most important tips for ensuring business continuity in 2018.

Rethink Your Definition Of Disaster

When most people think of a disaster, they tend to imagine enormous natural events like a blizzard or tornado. These problems are certainly a possibility and need to be dealt with, but they aren’t the only issues you need to worry about. In fact, your biggest concern should be the person on the other end of the paychecks you send out.

Employee mistakes are one of the largest causes of downtime across every industry, with a survey from Veriflow finding that 25 percent of respondents believe human error contributes to frequent downtime, hampering business continuity. The statistics are even worse in the data center. Rick Schuknecht, who is