IT Management https://www.webpronews.com/technology/ Breaking News in Tech, Search, Social, & Business

Hackers Claim to Have Breached Cisco As Company Investigates https://www.webpronews.com/hackers-claim-to-have-breached-cisco-as-company-investigates/ Tue, 15 Oct 2024 16:46:32 +0000 https://www.webpronews.com/?p=609401 Hackers are claiming to have breached Cisco and stolen data, with the company saying it is investigating the claims.

According to BleepingComputer, bad actors have been trying to sell data purportedly stolen from Cisco via online forums. The hackers making the claims are the well-known “IntelBroker,” working with “EnergyWeaponUser” and “zjj.”


“Compromised data: Github projects, Gitlab Projects, SonarQube projects, Source code, hard coded credentials, Certificates, Customer SRCs, Cisco Confidential Documents, Jira tickets, API tokens, AWS Private buckets, Cisco Technology SRCs, Docker Builds, Azure Storage buckets, Private & Public keys, SSL Certificates, Cisco Premium Products & More!,” reads the post on one hacking forum.

Cisco says it is aware of the hackers’ claims and it is investigating their validity.

“Cisco is aware of reports that an actor is alleging to have gained access to certain Cisco-related files,” a Cisco spokesperson told BleepingComputer.

“We have launched an investigation to assess this claim, and our investigation is ongoing.”

The hackers say they breached Cisco on October 6 and provided samples of the purported stolen data, although they did not provide details on how the hack was carried out.

Inside Yum! Brands’ Data Revolution: How CDO Cameron Davies is Transforming Customer Experiences Globally https://www.webpronews.com/inside-yum-brands-data-revolution-how-cdo-cameron-davies-is-transforming-customer-experiences-globally/ Fri, 11 Oct 2024 18:10:44 +0000 https://www.webpronews.com/?p=608196 Yum! Brands, home to renowned restaurant chains such as Taco Bell, Pizza Hut, KFC, and Habit Burger, is the world’s largest restaurant company. With a presence in 155 countries and more than 50,000 locations, managing and optimizing customer data is critical to their ability to serve millions of customers each day. At the helm of this data-driven revolution is Cameron Davies, Chief Data Officer (CDO) at Yum! Brands, who has been instrumental in transforming the company’s approach to data. In a recent conversation, Davies shared a deep dive into the company’s evolving customer data strategy, emphasizing the role of first-party data, the importance of technology partnerships, and how the company is positioning itself for the future.

The Strategic Role of Data in Global Operations

Yum! Brands’ data strategy is not just about marketing or customer engagement—it underpins the entire operational infrastructure of the business. Davies explains, “We look at data from three fundamental perspectives: easy operations, easy experiences, and easy intelligence. The goal is to use data to simplify and improve everything from supply chain management to customer interactions.”

The concept of “easy operations” is critical for a global company like Yum! Brands. Davies elaborates that his team is responsible for helping restaurant operators make data-driven decisions on a day-to-day basis. “We’re leveraging AI and machine learning to determine everything from how much food we should order, to how fast we should cook it, and even how much we should cook in real-time,” he says. This granular use of data has significant implications for reducing waste, improving service speed, and ensuring consistency across thousands of locations worldwide.

The “easy experiences” pillar focuses on how Yum! Brands can use data to improve customer journeys. “Are we remembering your preferences? Are we getting you relevant offers that resonate with your tastes? Are we ensuring that these experiences translate when you move from a digital platform to a physical restaurant?” asks Davies. He stresses that data is at the core of creating seamless, omnichannel experiences for customers, especially as consumer expectations for personalization continue to rise. For Davies, this is where data becomes truly transformational, enabling the company to deliver on the promise of a more connected, convenient, and personalized dining experience.

“Data is not just about operational efficiencies; it’s about enhancing the experience for both the customer and our employees in the restaurants,” says Davies. “The intelligence we derive from our data allows us to anticipate needs, personalize offers, and ultimately, build deeper relationships with our customers.”

First-Party Data: Unlocking a Valuable Resource

Yum! Brands’ journey into data transformation began with a realization about the value of its first-party data. “When you start asking yourself, ‘How much first-party data do we actually have?’ you go in, look, and say, ‘Holy smokes!’” Davies recalls. The amount of customer data across the company’s four major brands—Taco Bell, Pizza Hut, KFC, and Habit Burger—was staggering. This data was not just vast but also unique in its potential to deliver actionable insights.

“Think about it,” says Davies, “when someone orders a pizza, they’re willing to give us a lot of personal information if it means getting their pizza delivered on time and hot. There’s a natural value exchange in our business that is not as prevalent in other industries.” This direct interaction with customers has allowed Yum! Brands to accumulate a treasure trove of data, from purchase histories to location preferences, all of which can be used to enhance customer experiences.

Yet, having access to first-party data is only part of the equation. The challenge, as Davies points out, lies in effectively using that data. “From an operations perspective, we’ve been doing pretty well. But from a forward-thinking, one-to-one digital marketing perspective, we realized that we were not leveraging our data as effectively as we could be,” he admits. This gap between data collection and data activation spurred Yum! Brands to embark on a journey of transformation, focused on optimizing the way it uses customer data to drive personalized marketing and operational efficiencies.

Choosing the Right Customer Data Platform (CDP)

For any enterprise-level organization, choosing the right Customer Data Platform (CDP) is a pivotal decision. It was no different for Yum! Brands, which undertook a rigorous process to select a partner that could meet its unique requirements. “When we started looking for a CDP, it wasn’t just about commercials or functionality. It was about finding a partner who could go on this journey with us,” says Davies. The right CDP partner, according to Davies, is not just a vendor but an extension of the organization’s data strategy, one that can adapt and grow alongside the business.

This philosophy of partnership led Yum! Brands to select Treasure Data as its CDP provider. “There were a lot of good companies out there, but we were looking for something more than just a product. We wanted a partner who understood the complexities of working in a franchisee environment and who could collaborate with us in a meaningful way,” Davies notes. The ability to work closely with franchisees is crucial for Yum! Brands, as the company operates on a decentralized model where individual franchisees often have different needs and challenges. “At Yum! Brands, we like to use the term ‘taking people with you,’ because we can’t just dictate solutions from the top down. We have to bring our franchisees along on the journey,” says Davies.

This approach to collaboration was essential in the decision-making process. Davies emphasizes that the partnership with Treasure Data has allowed Yum! Brands to maintain flexibility while pursuing its long-term goals. “We’ve had to flex, but that’s what a journey is all about—it’s never a straight line. We need partners who are willing to adapt as we move forward,” he explains. This adaptability is particularly important in an environment as dynamic as the restaurant industry, where consumer behaviors can shift rapidly, and operational demands can vary widely by region.

Navigating the Complexities of a Global Franchise

One of the most unique aspects of Yum! Brands’ data strategy is its global franchise model, which introduces an additional layer of complexity when it comes to data integration and utilization. “Operating in a franchisee environment is fundamentally different from a corporate-owned model,” says Davies. “You don’t just implement changes overnight. You have to bring your franchisees along on the journey, helping them see the value of the new data tools and platforms.”

For Davies and his team, this means constant collaboration, both internally and with external partners like Treasure Data. “We call it ‘taking people with you’ because it’s about moving everyone in the same direction. I can’t tell a franchisee to do something—they have to want to do it themselves,” he explains. This collaborative approach has been essential in aligning the company’s broader data strategy with the needs and priorities of individual franchisees.

Davies notes that one of the keys to making this model work is clear communication and flexibility. “It’s not about dictating a solution; it’s about listening, adjusting, and making sure that the strategy we’re implementing works for everyone,” he says. This decentralized approach to data management allows Yum! Brands to be both agile and responsive, ensuring that its data strategy is adaptable to the unique challenges of each market and franchise.

A Data-Driven Transformation

As Yum! Brands continues to build out its customer data strategy, Davies is optimistic about the future. “We’ve come a long way, but there’s still so much potential to unlock,” he says. The company’s focus on first-party data, combined with its commitment to collaboration and innovation, positions it as a leader in the restaurant industry’s digital transformation. “We’ve got some really good data,” says Davies. “Now it’s about using it effectively to deliver on our customer promise and to create better, more personalized experiences for each of our customers.”

For Chief Data Officers at enterprise organizations, Yum! Brands’ journey offers valuable lessons in how to approach data transformation at scale. From the importance of choosing the right technology partners to navigating the complexities of a franchise model, Yum! Brands is demonstrating how a thoughtful, data-driven strategy can drive both operational efficiencies and enhanced customer experiences.

As Davies puts it, “This isn’t just about technology; it’s about leadership. It’s about taking people with you, understanding their needs, and building a strategy that works for everyone.” For Yum! Brands, the journey has only just begun, but with a clear focus on collaboration and customer experience, the company is well-positioned to continue leading the way in the evolving world of data-driven business.

MoneyGram Data Breach Is Worst-Case Scenario https://www.webpronews.com/moneygram-data-breach-is-worst-case-scenario/ Tue, 08 Oct 2024 18:16:35 +0000 https://www.webpronews.com/?p=609337 MoneyGram has notified users of a data breach, and initial details suggest the breach is about as bad as it could possibly be.

MoneyGram issued a notice of the data breach on Monday. On September 27, 2024, the company discovered “that an unauthorized third party” had accessed the company’s systems between September 20 and 22. The company detailed the data that was stolen and—spoiler alert—it’s a worst-case scenario.

The impacted information included certain affected consumer names, contact information (such as phone numbers, email and postal addresses), dates of birth, a limited number of Social Security numbers, copies of government-issued identification documents (such as driver’s licenses), other identification documents (such as utility bills), bank account numbers, MoneyGram Plus Rewards numbers, transaction information (such as dates and amounts of transactions) and, for a limited number of consumers, criminal investigation information (such as fraud). The types of impacted information varied by affected individual.

The company says it has already taken certain systems offline, which is temporarily impacting its ability to serve its customers. MoneyGram is also working with law enforcement and cybersecurity experts.

In the meantime, MoneyGram recommends users “remain vigilant” to potential fallout from the breach.

We recommend that you remain vigilant for incidents of fraud and identity theft by reviewing account statements and monitoring your free credit reports. If you are in the U.S. and would like to check your credit report, you are entitled under U.S. law to one free credit report annually from each of the three nationwide consumer reporting agencies. U.S. residents can order a free credit report by visiting www.annualcreditreport.com or calling toll-free at 1-877-322-8228. The U.S. Reference Guide provides recommendations by the U.S. Federal Trade Commission on the protection of personal information. We also recommend that you remain alert for unsolicited communications involving your personal information.

MoneyGram is also offering impacted US customers identity and credit monitoring services, free of cost for two years.

In terms of the impact it could have on consumers, this data breach takes the cake. Names, dates of birth, Social Security numbers, government-issued IDs, bank account info, and transaction data give bad actors everything they need to open fraudulent accounts, not to mention gain access to existing ones.

Only time will tell how such a devastating data breach occurred, but it’s safe to say this one is going to haunt MoneyGram and its customers for a long time.

Linux Distro Reviews: Linux Mint 22 https://www.webpronews.com/linux-distro-reviews-linux-mint-22/ Mon, 07 Oct 2024 15:24:34 +0000 https://www.webpronews.com/?p=609298 The Linux Mint team released version 22 of the venerable distro, which includes several significant improvements for users.

Linux Mint is one of the most well-regarded Linux distros available. While it is often recommended for new users, it’s one of the few distros that equally serves new and veteran users.

The new version builds on that pattern, bringing a host of improvements.


A New Base

One of the biggest changes with version 22 is the new Ubuntu base. While the Mint team maintains a version, LMDE, that is based on Debian, the mainline edition is based on Ubuntu LTS.

Mint 22 is based on the latest Ubuntu 24.04, bringing all the benefits that come with it, including updated applications, better performance, improved security, and newer kernels.

A major example of the improved security is how the new base handles Personal Package Archives (PPAs), a popular way for users to get the latest versions of some packages, as Ubuntu’s Oliver Smith highlighted in a blog post.

PPAs are a critical tool for development, testing and customisation, enabling users to install software outside of the official Ubuntu archives. This allows for a great deal of software freedom but also comes with potential security risks due to the access they are granted to your OS. In Ubuntu 24.04 LTS, PPAs are now distributed as deb822-formatted .sources files with their signing key directly embedded into the file’s signed-by field. This establishes a 1:1 relationship between the key and the repository, meaning one key cannot be used to sign multiple repositories and removing a repository also removes its associated key. In addition, APT now requires repositories to be signed using stronger public key algorithms.
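To make the deb822 format concrete, here is a minimal sketch of what such a .sources file looks like. The PPA path and the key material below are hypothetical placeholders, not a real archive or key; the point is that the `Signed-By` field embeds the key in the same file that defines the repository.

```shell
# Sketch of a deb822-formatted PPA definition (placeholder PPA and key).
cat > /tmp/example-ppa.sources <<'EOF'
Types: deb
URIs: https://ppa.launchpadcontent.net/example-user/example-ppa/ubuntu/
Suites: noble
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mQINBExampleKeyMaterialForIllustrationOnly=
 -----END PGP PUBLIC KEY BLOCK-----
EOF

# Because the key lives inside the repository definition itself,
# deleting this one file removes the repository and its key together.
grep 'Signed-By' /tmp/example-ppa.sources
```

On a real Ubuntu 24.04 system these files live under `/etc/apt/sources.list.d/`; the example writes to `/tmp` purely for illustration.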

Similarly, the Ubuntu 24.04 base improves the security surrounding unprivileged user namespaces.

Another significant security enhancement is the restriction of unprivileged user namespaces. These are a widely used feature of the Linux kernel that provide additional security isolation for applications that construct their own sandboxes, such as browsers which would then use that space to execute untrusted web content. So far so good, however the ability to create unprivileged user namespaces can expose additional attack surfaces within the Linux kernel and has proven to be a step in a significant number of exploits. In Ubuntu 24.04 LTS, AppArmor is now used to selectively control access to unprivileged user namespaces on a per application basis so that only applications with legitimate need can leverage this functionality.
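On an Ubuntu 24.04-based system (which includes Mint 22), you can check whether this restriction is active by reading the corresponding sysctl. The knob below is Ubuntu-specific and may be absent on other kernels, so the sketch falls back gracefully; a value of 1 means unprivileged user namespace creation is gated by AppArmor.

```shell
# Check Ubuntu's AppArmor restriction on unprivileged user namespaces.
# This sysctl is an Ubuntu kernel feature; other kernels may not have it.
f=/proc/sys/kernel/apparmor_restrict_unprivileged_userns
if [ -r "$f" ]; then
    echo "apparmor_restrict_unprivileged_userns=$(cat "$f")"
else
    echo "sysctl not present on this kernel"
fi
```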

Linux Mint benefits from these, and many other, improvements thanks to the rebase.

Linux Mint-Specific Improvements

In addition to the improvements Mint 22 inherits from Ubuntu, the team has also made some improvements that are unique to the distro.

Application Changes

Some of these involve rolling back changes that go against Mint’s philosophy.

For example, Ubuntu uses the Gnome desktop environment (DE), which has made it increasingly difficult to theme apps. In contrast, the ability to theme one’s desktop and apps is a core value of the Mint team and showcased in their homegrown Cinnamon DE.

Team leader Clément (Clem) Lefèbvre outlined the project’s divergence with Gnome’s direction, as well as Ubuntu’s continued dependence on Snaps.

An updated package base doesn’t just bring new technology, it can sometimes also threaten existing features.

Thunderbird continues to be available in Linux Mint 22 as a native .deb package. Following the decision by Ubuntu to move it to Snap, Linux Mint is now responsible for packaging it.

With GNOME 46, libgoa/libgoa-backend 3.50 moved to GTK4 and could no longer be used by GTK3 applications. This meant that Online Accounts support had to disappear from Cinnamon, Budgie and Unity. The XApp project implemented a standalone application called “GNOME Online Accounts GTK”. Not only did this bring the feature back in these three desktop environments, it also made it possible for it to be used in MATE and Xfce.

In Ubuntu 24.04, a number of GNOME applications moved to libAdwaita and stopped supporting the system theme.

Since selecting a theme is a core part of the desktops shipped by Linux Mint (Cinnamon, MATE and Xfce), apps are required to support it.

As a result, the GNOME Font Viewer was removed and the following applications were downgraded back to GTK3 versions: Celluloid, GNOME Calculator, Simple Scan, Baobab, System Monitor, GNOME Calendar, File Roller, Zenity.

Linux Mint 22 Online Accounts – Credit Linux Mint

Security

Mint 22 also hides unverified Flatpaks by default in the Software Manager, requiring users to enable an option to see them. Although Flatpak, and the Flathub repo, are generally considered pretty safe, this measure is designed to protect newer users, while still giving experienced users the option to show unverified packages.

Kernels

Linux Mint 21 stayed on the same 5.15 kernel series throughout its two-year life cycle. While users could manually upgrade the kernel post-install, the older kernel meant that Mint 21 could not be installed on some newer hardware. To solve the problem, the Mint team maintained the Edge installation ISO, which was identical to standard Mint 21 except that it included a newer kernel by default.

With Linux Mint 22, the team has decided to follow Ubuntu’s kernel release and adopt the HWE kernel. HWE refers to Ubuntu’s Hardware Enablement stack that updates LTS releases with the latest kernel and Mesa graphics drivers. Rather than sticking with the original 6.8 kernel that Linux Mint 22 shipped with, the Mint team will adopt the HWE kernel updates when they become available, eliminating the need for an Edge version altogether.
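For reference, on stock Ubuntu the HWE stack is delivered through a metapackage following the `linux-generic-hwe-<release>` naming convention; Mint is expected to surface these updates through its own Update Manager, so the commands below are only a sketch of the underlying mechanism, not Mint's documented upgrade path.

```shell
# Show the currently running kernel series.
uname -r

# Opt into Ubuntu 24.04's Hardware Enablement (HWE) kernel stack.
sudo apt install linux-generic-hwe-24.04

# After a reboot, uname -r should report the newer HWE kernel series.
```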

The change in kernel strategy should help keep Mint current with newer hardware, and eliminate one of the biggest complaints critics leveled against the distro.

Cinnamon 6.2

The Cinnamon DE is already one of the best DEs, in terms of offering a near perfect blend of features, stability, and simplicity. The DE is my personal favorite, offering the best of Gnome and KDE, without the annoyances of either.

The version included with Linux Mint 22 includes a number of enhancements.

  • Nemo actions can be organized neatly thanks to a new Layout Editor.
  • Separators and submenus can be added.
  • Labels and icons can be overridden to tune actions to your liking in your context menu.

Cinnamon 6.2 also features many bug fixes, performance improvements and the following changes:

  • Fewer “printer added” notifications (silenced for 2 hours)
  • Wayland support: Clutter polkit agent
  • Spices: keybindings support
  • Better avatar support in polkit agent and user applet
  • Workspace switcher: middle click removes the workspace being hovered
  • Keybindings: ability to search by binding
  • Cornerbar applet: shift+click action added
  • Applets: improved precision in reporting VPN and battery states

The team is continuing work on Wayland support. While it is certainly improved over the initial release, Wayland support is still experimental, with the team targeting 2026 for a stable experience.

Nemo Actions – Credit Linux Mint

Read More: Linux Mint vs LMDE: Which Should You Choose?

Daily Usage

I’ve been using Mint 22 since mid-September, having spent the previous year using LMDE, and several months of using Linux Mint 21 before that. Overall, I can easily say that this is the best version of Mint I have used to date—and Mint and LMDE were already our highest-rated Linux distros in this entire series.

There are a number of things that have led to that conclusion, at least in my experience.

Newer Kernel

The fact that Mint 22 ships with kernel 6.8, instead of LMDE’s 6.1, means it is better suited to my hardware, especially my main machine, a Tuxedo Pulse with the AMD Ryzen 7 4800H. The chip is still new enough that AMD’s improvements to the Linux kernel directly benefit this chipset.

On older kernels, and especially the 6.1 series, I randomly experienced an issue where my computer would refuse to fully wake from sleep. The keyboard would light up, the screen would go from inactive to backlit black, but no image would appear and the machine would not respond to input. Because it was random, it was nearly impossible to diagnose, and there was never anything in the logs providing a clear indication of what was happening.

There have been several AMD-supplied additions to subsequent versions of the Linux kernel designed specifically to address power supply issues with various AMD chips, including mine.

While it is true that LMDE can use Debian backports and, therefore, currently has access to kernel 6.10, backports don’t receive the same support from the Debian Security Team, meaning backported kernels are not as safe as the standard one included with the release.
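For comparison, pulling a newer kernel into LMDE via Debian backports looks roughly like the following (bookworm shown; as noted above, backported kernels fall outside the Debian Security Team's regular coverage, which is the trade-off being described).

```shell
# Enable the bookworm-backports repository on LMDE 6 / Debian 12.
echo 'deb http://deb.debian.org/debian bookworm-backports main' | \
    sudo tee /etc/apt/sources.list.d/backports.list

# Refresh package lists, then install the backported kernel metapackage.
sudo apt update
sudo apt install -t bookworm-backports linux-image-amd64
```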

In contrast, not only is the 6.8 kernel included with Mint obviously supported by Ubuntu’s security team, but all HWE kernels that are released later will also have Ubuntu’s full support. As a result, Ubuntu currently provides a safer way to have a newer kernel that is more compatible with my specific hardware.

Newer Packages

While Flatpak has mitigated much of the outdated-package trope that critics level against Debian, it doesn’t completely solve the problem. For example, Geany is my preferred text editor, but LMDE/Debian is still running the older 1.38 release. While the latest 2.0 version is available via Flatpak, the Flatpak maintainer has said he may stop maintaining the package. In contrast, Linux Mint 22/Ubuntu 24.04 has the latest version, which brings a number of major improvements over 1.38.
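For anyone who does want the newer Geany on LMDE today, the Flatpak route mentioned above is a single command, assuming the Flathub remote is already configured (org.geany.Geany is the application's ID on Flathub).

```shell
# Install Geany 2.x from Flathub and confirm the installed version.
flatpak install -y flathub org.geany.Geany
flatpak run org.geany.Geany --version
```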

This repeats across several applications. While backports are a valid option on LMDE, not all applications are available via backports. The newer version of Geany, for example, has not been backported.

While it is true that the situation will likely reverse in 2025, when Debian 13 and LMDE 7 are released, for right now Mint 22 offers a significantly newer package base and it’s been a pleasure to use it. What’s more, thanks to PPAs and Ubuntu’s habit of updating some apps mid-release, Ubuntu-based distros generally have a bit more support for finding the latest versions of apps if you need them.

Performance

Much like newer packages, this is an area that may reverse when Debian 13/LMDE 7 comes out, but Mint 22 does have a performance edge over Debian 12/LMDE 6, at least in my experience. It’s not a major advantage, but it is there.

Given that Mint 22 is a year newer than LMDE 6, that performance edge is not surprising. But Ubuntu 24.04 also included a lot of performance-enhancing improvements over previous versions, so it will be interesting to see if Debian 13 leapfrogs it significantly (like Debian 12 did Ubuntu 22.04), or if Ubuntu 24.04 will hold its own against the newer Debian next year.

Spit and Polish

While LMDE is an outstanding distro, and my personal favorite, there’s no denying that Mint 22 has a bit more spit and polish that tracks with it being the main focus of the Mint team. LMDE currently only accounts for roughly 11% of the Mint user base, meaning just under 90% of the base is using the Ubuntu-based version of Mint.

For the most part, the two Mint distros are nearly identical, aside from Ubuntu-specific tools. For example, the Driver Manager and Kernel Manager included with mainline Mint are Ubuntu tools and there is no easy way to incorporate them into LMDE.

Even beyond Ubuntu-specific things, small things demonstrate the Mint team’s main area of focus. For example, Debian 12 introduced a bug where network notification dialogs do not respect the “Do not show this message again” option. While it is true that this is a Debian-specific bug, not a Mint one, the Mint team is well-known for their ability to smooth out Ubuntu-specific bugs and issues in mainline Mint. That specific bug persists in LMDE 6, however. To be clear, this is by no means a deal-breaker. In fact, the issue is very easy to fix. However, the fact that the Mint team has not fixed it—while fixing Ubuntu-specific annoyances on mainline Mint—shows where their priorities are.

Given that nearly 90% of their user base is on the mainline edition, there’s certainly no faulting the devs for that approach. Clem has repeatedly stated that LMDE exists as a proof of concept and a safety net, as well as a way to test the distro’s homegrown X Apps on the wider Debian base, not as a prime goal.

Clem addressed the issue himself in the comments pertaining to the release of LMDE 6.

I’ve nothing bad to say about 22.04. I hope Ubuntu continues to be as good going forward and doesn’t neglect its APT package base. If we don’t have a reason to transition we won’t. Ubuntu is still the best APT package base out there in our opinion. LMDE is there as a potential solution, but it is not a goal in itself.

Conclusion

For the past year, I have been a diehard LMDE user. To be fair, it’s still my favorite version of Mint and, therefore, my favorite distro, period. I like a Debian base, and I like that LMDE is a community project based on a community project, as opposed to Mint 22 being a community project based on a commercial distro.

Nonetheless, my experience with Mint 22 has been so good that I’m staying with it, despite preferring LMDE. I find Ubuntu’s kernel strategy and Mesa updates more in line with my hardware needs at the moment. And there’s no denying the benefit of using the version of the distro that is the main focus of its maintainers, as opposed to using what is essentially a side project.

What about the future? It’s hard to say what the future holds, either for my own usage or for Mint in general. Some believe the Mint team will eventually be forced to switch to LMDE as the mainline edition if Ubuntu continues its “Snapification” of more and more internal components and makes other decisions the Mint team disagrees with.

In fact, Clem has directly acknowledged Ubuntu’s increased usage of Snaps, saying he doesn’t believe the trend will continue.

It’s something we keep an eye on and invest time in, that’s true. It could potentially lead to a switch, it’s hard to say “always/never” because as you said it depends on what we’re dealing with. Realistically though I don’t think Snap will last forever. I see it getting abandoned just like Mir or Unity when it fails to get the traction and return on investment Canonical wants from it.

If he is correct, then the Mint team will likely never move off of the Ubuntu base, nor should they, given the benefits it provides.

If, on the other hand, LMDE does become the mainline edition, I will happily go back to knowing it has the team’s main focus and attention. For that matter, there’s at least a chance I may switch back to LMDE 7 next year, since that version will have a much newer official kernel that works with my hardware better than 6.1 in LMDE 6.

In the meantime, despite my preference for LMDE, I am thoroughly enjoying my time on Mint 22 and have no intention of leaving it for the foreseeable future.

Rating

5 out of 5 stars

Cloudflare Defeats Another Patent Troll, Forces Troll to Free Patents https://www.webpronews.com/cloudflare-defeats-another-patent-troll-forces-troll-to-free-patents/ Fri, 04 Oct 2024 16:37:02 +0000 https://www.webpronews.com/?p=609231 Cloudflare is continuing its crusade against patent trolls, defeating Sable and forcing it to free its patents by dedicating them to the public.

Cloudflare is one of the few companies that welcomes fights with patent trolls, even setting up Project Jengo as a way of incentivizing users to help it fight such companies. Its latest legal battle was with Sable, a patent troll that sued Cloudflare in 2021 and tried to make the case that Cloudflare was infringing several of Sable’s patents.


Cloudflare immediately tapped into Project Jengo, rewarding users who help it find “prior art,” or applications of the tech covered by Sable’s patents that were in use before Sable’s patents went into effect. Thanks in no small part to this effort, by the time the case went to trial, there was only one remaining claim against a single patent—down from the 100 claims against four patents when Sable initially sued Cloudflare.

The Challenges Cloudflare Faced

One of the biggest challenges Cloudflare faced was not the merits of the case, but explaining very technical terms to a non-technical jury, as the company explains.

To defeat Sable’s claim of infringement we needed to explain to the jury — in clear and understandable terms — why what Cloudflare does is different from what was covered by claim 25 of Sable’s remaining patent, U.S. Patent No. 7,012,919 (the ’919 patent). To do this, we enlisted the help of one of our talented Cloudflare engineers, Eric Reeves, as well as Dr. Paul Min, Senior Professor of Electrical & Systems Engineering at Washington University, an expert in the field of computer networking. Eric and Dr. Min helped us explain to the jury the multiple reasons we didn’t infringe.

While Sable tried its best to convince the jury otherwise, Cloudflare’s experts were able to clearly demonstrate how the CDN provider’s tech was a unique home-grown invention, and not a ripoff of Sable’s patent.

While Sable’s technical expert tried his hardest to convince the jury that various software and hardware components of Cloudflare’s servers constitute “line cards,” his explanations defied credibility. The simple fact is that Cloudflare’s servers do not have line cards.

Ultimately, the jury understood, returning a verdict that Cloudflare does not infringe claim 25 of the ‘919 patent.

Going Further to Invalidate Sable’s Claim

Cloudflare wasn't content with just winning the infringement claim. Instead, the company wanted to go further and invalidate the claim altogether, despite the difficulty of doing so.

Proving invalidity to a jury is hard. The burden on the defendant is high: Cloudflare needed to prove by clear and convincing evidence that claim 25 is invalid. And, proving it by describing how the claim is obvious in light of the prior art is complicated.

To do this, we again relied on our technical expert, Dr. Min, to explain how two prior art references, U.S. Patent No. 6,584,071 (Kodialam) and U.S. Patent No. 6,680,933 (Cheeseman) together render claim 25 of the ’919 patent obvious. Kodialam and Cheeseman are patents from Nortel Networks and Lucent relating to router technology developed in the late 1990s. Both are prior art to the ’919 patent (i.e., they pre-date the priority date of the ’919 patent), and when considered together by a person skilled in the area of computer engineering and computer networking technology, they rendered obvious the so-called invention of claim 25.

Ultimately, Cloudflare's efforts paid off, with the jury agreeing that Sable's claim was invalid.

Cloudflare Win Over Sable – Credit Cloudflare

The Repercussions for Sable

The case ended up being a total loss for Sable on all counts. Throughout the case, it was apparent that Sable was motivated strictly by trying to achieve an easy payday, which it was denied. Even more fitting, Sable ended up paying Cloudflare, as well as giving up any possibility of weaponizing its patents again.

In the end, Sable agreed to pay Cloudflare $225,000, grant Cloudflare a royalty-free license to its entire patent portfolio, and to dedicate its patents to the public, ensuring that Sable can never again assert them against another company.

Let’s repeat that first part, just to make sure everyone understands:

Sable, the patent troll that sued Cloudflare back in March 2021 asserting around 100 claims across four patents, in the end wound up paying Cloudflare. While this $225,000 can’t fully compensate us for the time, energy and frustration of having to deal with this litigation for nearly three years, it does help to even the score a bit. And we hope that it sends an important message to patent trolls everywhere to beware before taking on Cloudflare.

Why Cloudflare Deserves An Award

Patent trolls are the bane of the tech industry’s existence. To be clear, there is nothing wrong with a company aggressively defending its intellectual property when it puts in the time and money to develop a technology, or purchases a tech and its patents, and goes on to use them.

In the case of patent trolls, however, these are companies that file vague, overly-broad patents to cover any number of concepts that may one day be used by some company. Or a patent troll will purchase a promising patent from an inventor. In both cases, however, the patent troll has no intention of developing any technology that may be covered by the patent. Their sole goal is to wait for another company to develop such tech and then shake them down for money.

Patent trolls are the worst of the worst, artificially inflating the cost of developing tech with their greedy shakedowns and lawsuits.

Cloudflare, with its long history of relentlessly going after patent trolls, deserves an award.

]]>
609231
FCC Unveils $200 Million Program to Secure Schools and Libraries https://www.webpronews.com/fcc-unveils-200-million-program-to-secure-schools-and-libraries/ Thu, 03 Oct 2024 16:18:27 +0000 https://www.webpronews.com/?p=609181 The Federal Communications Commission has unveiled the Schools and Libraries Cybersecurity Pilot Program, dedicating $200 million to the cause.

Schools and libraries are some of the most vulnerable cybersecurity targets, largely because they often lack the budget to employ the necessary professionals to protect their organizations from threats. The FCC is hoping to address that situation with its Pilot Program.

Modeled after the FCC’s Connected Care Pilot, the Pilot Program will evaluate the effectiveness of using Universal Service funding to support cybersecurity services and equipment to protect school and library broadband networks and data in order to determine whether to fund them on a permanent basis.

The program will allow participating schools and libraries to seek reimbursement for eligible cybersecurity expenses.

Pilot Program participants will be eligible to seek reimbursement for a wide variety of cybersecurity services and equipment, subject to an overall cap. Eligible services and equipment include: Advanced/Next Generation Firewalls; Endpoint Protection; Identity Protection and Authentication; and Monitoring, Detection, and Response.

The FCC said it will prioritize facilities based on the populations that are most in need of cybersecurity support.

To facilitate the inclusion of a diverse set of Pilot projects and to target Pilot funds to the populations most in need of cybersecurity support, the FCC will award support to a combination of large and small and urban and rural schools, libraries, and consortia, with an emphasis on funding proposed Pilot projects that include low-income and Tribal applicants.

Once schools and libraries are accepted into the Pilot Program, they will receive a letter informing them of their inclusion and can begin submitting reimbursement requests.

Applicants selected to participate in the Pilot Program will be announced by Public Notice. The Public Notice will provide additional information regarding next steps, including the process for soliciting bids and procuring desired cybersecurity services and equipment. After participants complete a competitive bidding process, they will submit requests for services and, upon approval, they will receive a Funding Commitment Decision Letter (FCDL) approving or denying their funding requests.

Once an FCDL is issued and the delivery of services has started, participants and service providers may submit requests for reimbursement from the Pilot Program. If necessary, participants can request reimbursement and request certain changes to their funding requests from the Universal Service Administrative Company (USAC), the Pilot Program administrator.

Given the rise of cybersecurity threats targeting non-commercial entities, the FCC’s Pilot Program is sure to help provide a lifeline to some of the most vulnerable organizations.

]]>
609181
Microsoft Defender Adds Insecure Wi-Fi Network Protection https://www.webpronews.com/microsoft-defender-adds-insecure-wi-fi-network-protection/ Tue, 01 Oct 2024 17:33:54 +0000 https://www.webpronews.com/?p=609089 Microsoft Defender is expanding its protection, adding the ability to protect users when they connect to insecure Wi-Fi networks.

Free Wi-Fi networks are offered by businesses of all sizes, but those networks can often pose serious threats to users’ security and privacy. Any number of attacks, including man-in-the-middle attacks, evil twin attacks, and data theft are just a few of the risks insecure networks pose.

Microsoft Defender, the company's cybersecurity app, is adding features to protect users when they need to access public Wi-Fi networks. The company already added VPN support, since VPNs are one of the go-to solutions for keeping data safe on an insecure network.

The company is adding the following features:

  • Auto detection and notification of unsecure Wi-Fi connections with the ability to turn on a virtual private network (VPN) in the Defender app for added safety
  • Privacy protection (VPN) is now available on all our supported device platforms including Windows, macOS, Android, and iOS.
  • Feature availability in more countries including US, UK, Germany, and Canada. And more countries are coming soon. We’re adding privacy protection to ten additional countries in Europe, Asia, and LATAM regions soon.

The suspicious Wi-Fi detection, in particular, will go a long way toward keeping users safe.

We’ve added detection for un-safe Wi-Fi (suspicious Wi-Fi). These detections are possible using Defender heuristics that examine multiple characteristics of a Wi-Fi hotspot to determine if it is suspicious. As with unsecure Wi-Fi, you get a notification for un-safe Wi-Fi as well and can turn on Defender VPN for added safety.
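Microsoft hasn't published the heuristics Defender actually uses, but the general idea can be sketched in a few lines. The sketch below is purely hypothetical: it flags any SSID that is broadcast without encryption, or that is advertised by access points with conflicting security settings (a classic evil-twin signature). The `Hotspot` type and its fields are illustrative, not part of any Microsoft API.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Hotspot:
    ssid: str      # network name as broadcast
    bssid: str     # access-point hardware address
    security: str  # "open", "wpa2", "wpa3", ...


def suspicious_ssids(scans: list[Hotspot]) -> set[str]:
    """Flag SSIDs that are unencrypted or advertised with conflicting security."""
    by_ssid: dict[str, set[tuple[str, str]]] = defaultdict(set)
    for hotspot in scans:
        by_ssid[hotspot.ssid].add((hotspot.bssid, hotspot.security))

    flagged = set()
    for ssid, access_points in by_ssid.items():
        securities = {security for _, security in access_points}
        if "open" in securities:   # unsecured: traffic is readable on the air
            flagged.add(ssid)
        elif len(securities) > 1:  # same name, different security: possible evil twin
            flagged.add(ssid)
    return flagged
```

A real product would weigh many more signals (certificate behavior, captive-portal redirects, signal anomalies), but the structure of the check, grouping what's observed by SSID and looking for inconsistencies, is the same.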

Microsoft has committed to revamping its security after a string of embarrassing breaches. It’s good to see the company displaying an equal level of concern for keeping users safe and secure.

]]>
609089
FCC Fines T-Mobile, Forces Company to Improve Cybersecurity https://www.webpronews.com/fcc-fines-t-mobile-forces-company-to-improve-cybersecurity/ Tue, 01 Oct 2024 15:18:50 +0000 https://www.webpronews.com/?p=609085 The Federal Communications Commission announced a “groundbreaking data protection and cybersecurity settlement with T-Mobile,” fining the company and forcing changes to its operations.

T-Mobile has an atrocious record when it comes to cybersecurity, suffering multiple data breaches in recent years, some of which have impacted tens of millions of users. Hackers even bragged about accessing the company’s internal networks more than 100 times in 2022 alone. Despite settling several class-action cases for a whopping $350 million, the company has continued to struggle with cybersecurity.

The FCC appears to have reached the limits of its patience, and is now forcing the company to do better.

The Federal Communications Commission today announced a groundbreaking data protection and cybersecurity settlement with T-Mobile to resolve the Enforcement Bureau’s investigations into significant data breaches that impacted millions of U.S. consumers. To settle the investigations, T-Mobile has agreed to important forward-looking commitments to address foundational security flaws, work to improve cyber hygiene, and adopt robust modern architectures, like zero trust and phishing-resistant multi-factor authentication. The Commission believes that implementation of these commitments, backed by a $15.75 million cybersecurity investment by the company as required by the settlement, will serve as a model for the mobile telecommunications industry. As part of the settlement, the company will also pay a $15.75 million civil penalty to the U.S. Treasury.

The settlement addresses multiple data breaches, including incidents from 2021-2023. The FCC acknowledged that carrier networks are prime targets for hackers, but that doesn't excuse lapses in security. Instead, it only underscores the need for such companies to provide the best security possible.

“Today’s mobile networks are top targets for cybercriminals,” said FCC Chairwoman Jessica Rosenworcel. “Consumers’ data is too important and much too sensitive to receive anything less than the best cybersecurity protections. We will continue to send a strong message to providers entrusted with this delicate information that they need to beef up their systems or there will be consequences.”

As part of the agreement, T-Mobile agreed to the following:

  • Corporate Governance – T-Mobile’s Chief Information Security Officer will give regular reports to the board concerning T-Mobile’s cybersecurity posture and business risks posed by cybersecurity. This is a foundational requirement for all well-governed companies. Corporate boards need both visibility and cybersecurity domain experience in order to effectively govern. This commitment ensures that the board’s visibility into cybersecurity is a key priority going forward.
  • Modern Zero-Trust Architecture – T-Mobile has agreed to move toward a modern zero trust architecture and segment its networks. This is one of the most important changes organizations can make to improve their security posture.
  • Robust Identity and Access Management – T-Mobile has committed to broad adoption of multi-factor authentication methods within its network. This is a critical step in securing critical infrastructure, such as our telecommunications networks. Abuse of authentication methods, for example through the leakage, theft, or deliberate sale of credentials, is the number one way that breaches and ransomware attacks begin. Consistent application of best practice identity and access methods will do more to improve a cybersecurity posture than almost any other single change.

“The wide-ranging terms set forth in today’s settlement are a significant step forward in protecting the networks that house the sensitive data of millions of customers nationwide,” said Loyaan A. Egal, Chief of the Enforcement Bureau and Chair of the Privacy and Data Protection Task Force. “With companies like T-Mobile and other telecom service providers operating in a space where national security and consumer protection interests overlap, we are focused on ensuring critical technical changes are made to telecommunications networks to improve our national cybersecurity posture and help prevent future compromises of Americans’ sensitive data. We will continue to hold T-Mobile accountable for implementing these commitments.”

Hopefully the FCC’s actions send a clear message to all companies that they must protect the data customers entrust them with.

]]>
609085
Meta Fined $101 Million for Storing Passwords in Plain Text https://www.webpronews.com/meta-fined-101-million-for-storing-passwords-in-plain-text/ Sat, 28 Sep 2024 02:02:17 +0000 https://www.webpronews.com/?p=608983 Ireland’s Data Protection Commission (DPC) has fined Meta €91 million ($101.5 million) for committing the cardinal sin of cybersecurity: storing passwords in plain text.

Some of the worst data breaches have occurred because passwords were stored in plain text. Unfortunately, Meta doesn’t seem to have gotten the memo, with the company admitting in 2019 that it had stored passwords for hundreds of millions of users in plain text. The only redeeming element is that the files in question were apparently not accessible to anyone outside of Facebook, according to the company’s statement at the time.

To be clear, these passwords were never visible to anyone outside of Facebook and we have found no evidence to date that anyone internally abused or improperly accessed them.

While there’s no evidence the passwords were accessible externally, the fact the passwords were stored in plain text means there was always a risk they could have been exposed, by either a bad actor internally or via an external hack.
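The standard practice that avoids this risk is to never store the password itself, only a salted, slow hash of it. A minimal sketch using Python's standard library scrypt (RFC 7914); the cost parameters here are illustrative, and a production system should tune them to current guidance:

```python
import hashlib
import hmac
import os

# Illustrative cost parameters; real deployments should follow current guidance.
SALT_LEN = 16
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)


def hash_password(password: str) -> bytes:
    """Return salt || scrypt digest; only this value is ever written to storage."""
    salt = os.urandom(SALT_LEN)
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt + digest


def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    salt, digest = stored[:SALT_LEN], stored[SALT_LEN:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, digest)
```

Because the salt is random per user, identical passwords produce different stored records, and an attacker who steals the table must brute-force each entry individually rather than simply reading the passwords, as was possible with Meta's plaintext files.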

The DPC has reached its final decision after it began investigating Meta Platforms Ireland Limited (MPIL) in 2019. The investigation found that MPIL infringed on the GDPR in the following ways:

  • Article 33(1) GDPR, as MPIL failed to notify the DPC of a personal data breach concerning storage of user passwords in plaintext;
  • Article 33(5) GDPR, as MPIL failed to document personal data breaches concerning the storage of user passwords in plaintext;
  • Article 5(1)(f) GDPR, as MPIL did not use appropriate technical or organisational measures to ensure appropriate security of users’ passwords against unauthorised processing; and
  • Article 32(1) GDPR, because MPIL did not implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including the ability to ensure the ongoing confidentiality of user passwords.

As a result of the investigation, MPIL will be reprimanded and fined $101.5 million.

This Decision of the DPC concerns the GDPR principles of integrity and confidentiality. The GDPR requires data controllers to implement appropriate security measures when processing personal data, taking into account factors such as the risks to service users and the nature of the data processing. In order to maintain security, data controllers should evaluate the risks inherent in the processing and implement measures to mitigate those risks. This decision emphasises the need to take such measures when storing user passwords.

The GDPR also requires data controllers to properly document personal data breaches, and to notify data protection authorities of breaches that occur. A personal data breach may, if not addressed in an appropriate and timely manner, result in damage such as loss of control over personal data. Therefore, when a controller becomes aware that a personal data breach has occurred, the controller should notify the supervisory authority without undue delay, in the manner prescribed by Article 33 GDPR.

“It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data,” said Graham Doyle, Deputy Commissioner at the DPC. “It must be borne in mind, that the passwords the subject of consideration in this case, are particularly sensitive, as they would enable access to users’ social media accounts.”

DPC’s Meta Decision – Credit DPC
]]>
608983
Tor Project and Tails OS Have Merged https://www.webpronews.com/tor-project-and-tails-os-have-merged/ Thu, 26 Sep 2024 15:46:05 +0000 https://www.webpronews.com/?p=608937 The Tor Project announced it has merged with the Tails OS project, in an effort to improve collaboration, reduce overhead, and improve users’ access to freedom-preserving options.

Tor is the leading privacy option for users trying to circumvent surveillance, designed from the outset to route traffic through multiple encrypted relays, masking a user’s browsing activity even when a network is being monitored. Tails OS is a Debian-based Linux distro designed to run entirely from a USB stick. As a result, an individual can temporarily use any computer as their own by booting from the USB stick, leaving no trace behind when they power the machine down and leave.
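Tor's multi-relay design is known as onion routing: the client wraps its message in one layer of encryption per relay, and each relay can strip only its own layer, so no single relay sees both who is talking and what is said. The toy below illustrates only the layering idea, using one-time-pad XOR layers; real Tor negotiates per-hop keys and uses authenticated ciphers, so treat this strictly as a sketch:

```python
import secrets


def xor(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR; the key must be exactly as long as the data."""
    if len(key) != len(data):
        raise ValueError("key and data must be the same length")
    return bytes(a ^ b for a, b in zip(data, key))


def wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    """Client side: add one layer per relay (the exit relay's layer innermost)."""
    onion = message
    for key in reversed(relay_keys):
        onion = xor(onion, key)
    return onion


def peel(onion: bytes, key: bytes) -> bytes:
    """Relay side: remove exactly one layer with this relay's own key."""
    return xor(onion, key)
```

In a three-hop circuit the guard relay peels first, the middle relay second, and the exit relay last; only after the final peel does the plaintext emerge, and no intermediate hop can read it.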

The projects are joining forces to pool their resources and make privacy-preserving tools more readily available to at-risk individuals, as well as average users.

Countering the threat of global mass surveillance and censorship to a free Internet, Tor and Tails provide essential tools to help people around the world stay safe online. By joining forces, these two privacy advocates will pool their resources to focus on what matters most: ensuring that activists, journalists, other at-risk and everyday users will have access to improved digital security tools.

In late 2023, Tails approached the Tor Project with the idea of merging operations. Tails had outgrown its existing structure. Rather than expanding Tails’s operational capacity on their own and putting more stress on Tails workers, merging with the Tor Project, with its larger and established operational framework, offered a solution. By joining forces, the Tails team can now focus on their core mission of maintaining and improving Tails OS, exploring more and complementary use cases while benefiting from the larger organizational structure of The Tor Project.

The merger builds on 15 years of collaboration and solidarity between the two projects, but will allow Tails to tap into Tor’s resources.

“Running Tails as an independent project for 15 years has been a huge effort, but not for the reasons you might expect. The toughest part wasn’t the tech–it was handling critical tasks like fundraising, finances, and HR. After trying to manage those in different ways, I’m really relieved that Tails is now under the Tor Project’s wing. In a way, it feels like coming home,” says intrigeri, Team Lead Tails OS, The Tor Project.

A History of Collaboration

Merging the two projects will expand Tor’s focus, allowing it to address privacy and security issues beyond just the web browser.

Whether it’s someone seeking access to the open web or facing surveillance, Tor and Tails offer complementary protections. While Tor Browser anonymizes online activity, Tails secures the entire operating system–from files to browsing sessions. For journalists working in repressive regions or covering sensitive topics, Tor and Tails are often used as a set to protect their communications and safeguard their sources. The merger will lead to more robust treatment of these overlapping threat models and offer a comprehensive solution for those who need both network and system-level security in high-risk environments.

It will also open up broader training and outreach opportunities. Until now, Tor’s educational efforts have primarily focused on its browser. With Tails integrated into these programs, we can address a wider range of privacy needs and security scenarios. Lastly, this merger will lead to increased visibility for Tails. Many users familiar with Tor may not yet know about Tails OS. By bringing Tails within the Tor Project umbrella, we can introduce this powerful tool to more individuals and groups needing to remain anonymous while working in hostile environments.

Joining Forces a Win for Users

“Joining Tor means we’ll finally have the capacity to reach more people who need Tails. We’ve known for a long time that we needed to ramp up our outreach, but we just didn’t have the resources to do so,” says intrigeri.

“By bringing these two organizations together, we’re not just making things easier for our teams, but ensuring the sustainable development and advancement of these vital tools. Working together allows for faster, more efficient collaboration, enabling the quick integration of new features from one tool to the other. This collaboration strengthens our mission and accelerates our ability to respond to evolving threats,” says Isabela Fernandes, Executive Director, The Tor Project.

The announcement is good news for the privacy community, and will be a major help to journalists, activists, and other at-risk groups who depend on such software for their life and work.

]]>
608937
Microsoft Deprecates Windows Server Update Services https://www.webpronews.com/microsoft-deprecates-windows-server-update-services/ Wed, 25 Sep 2024 22:12:45 +0000 https://www.webpronews.com/?p=608929

Microsoft has dropped some unwelcome news for system admins, with the company announcing it is deprecating the Windows Server Update Services (WSUS) feature.

Microsoft made the announcement as part of a list of features that have been removed or deprecated in the Windows Server 2025 preview.

Windows Server Update Services (WSUS) is no longer actively developed; all the existing capabilities and content continue to be available for your deployments.

The company’s Nir Froimovici said the move was made in an effort to simplify Windows management.

As part of our vision for simplified Windows management from the cloud, Microsoft has announced deprecation of Windows Server Update Services (WSUS). Specifically, this means that we are no longer investing in new capabilities, nor are we accepting new feature requests for WSUS. However, we are preserving current functionality and will continue to publish updates through the WSUS channel. We will also support any content already published through the WSUS channel.

Needless to say, the news is not sitting well with some admins. Eric Siron, a Microsoft MVP, acknowledged that WSUS has not received much love in recent years, but said deprecating it didn’t seem like the right solution.

Agreed that WSUS is a horrifically underdeveloped nightmare. But, this is not the answer. The answer is modernizing WSUS or replacing it. There’s nothing wrong with having better tools in Azure with an attached price tag. The problem comes from emptying the niche that WSUS occupies.

People need to stop thinking about this as, “I will approach this news on my systems with…” That’s not the problem. Of course, you will come up with a solution that works, and of course you will keep your systems patched. That’s not the point.

Siron points out the potential security implications of WSUS being deprecated, and the increased risk sensitive information will become vulnerable to hackers.

Realize right now that there is a 100% chance that one or more of these organizations has your personal information, credit card numbers, health records, all kinds of things. As soon as WSUS goes away, there’s a 100% chance that your data will wind up on a system that the organization didn’t want to pay to patch, somebody in subordinate.IT.company failed to properly beg someone in IT.company to patch, or OOPSIE somebody didn’t check the monthly patch result on. The risk is bad enough with WSUS. Again, look up Melissa and SQL Slammer. I forgot MSBlast, that one had a patch available before it was ever exploited, too, and still caused all kinds of drama. Anyway, the point is that it doesn’t have to be a system that you’re responsible for to become your problem.

The end of WSUS is a gift to attackers.

It’s clear that Microsoft wants to move people to Azure and its cloud services, but deprecating something like WSUS without providing a replacement solution may end up causing significant headaches down the road.

]]>
608929
Google Files EU Antitrust Complaint Against Microsoft https://www.webpronews.com/google-files-eu-antitrust-complaint-against-microsoft/ Wed, 25 Sep 2024 17:49:05 +0000 https://www.webpronews.com/?p=608914 Google has filed an antitrust complaint against Microsoft with the EU Commission, alleging the Redmond company is engaging in anti-competitive cloud licensing practices.

Amit Zavery, Google Cloud GM/VP and Head of Platform, and Google Cloud EMEA President Tara Brady penned a blog post outlining Google’s argument. The two executives make the case that the old ways of locking customers into a single vendor, which may have worked in the pre-cloud software industry, are no longer viable or beneficial in an era defined by cloud computing. In spite of that, the execs point out multiple instances where Microsoft has continued the old ways.

For years, in the productivity software space, Microsoft has locked customers into Teams, even when they preferred other providers. Now, the company is running the same playbook to push companies to Azure, its cloud platform. Microsoft’s licensing terms restrict European customers from moving their current Microsoft workloads to competitors’ clouds – despite there being no technical barriers to doing so – or impose what Microsoft admits is a striking 400% price markup.

To make matters worse, not only is Microsoft the only major cloud provider to still be trying to leverage vendor lock-in, but its attempts to do so are having significant negative effects, most notably in the realm of cybersecurity.

Microsoft is the only cloud provider to use these tactics, which have significantly harmed European companies and governments. Not only have they cost European businesses at least €1 billion a year, but also they have led to adverse downstream effects, including waste of tax funds, stifled competition, restrictions on distributors and channel partners, and heightened risk for organizations exposed to Microsoft’s “inadequate” security culture.

The execs specifically call out the role Windows Server plays in Microsoft’s efforts to lock customers into its Azure platform. Windows Server is a staple for much of the industry, and companies have long used their Windows Server licenses on the hardware of their choice. As cloud computing began to eclipse on-premise workflows, Microsoft saw a serious threat to its Windows Server business and reacted accordingly.

However, as cloud computing took off and promised to bring new benefits to European businesses, customers wanted to move their previously purchased licenses to other cloud providers, and in some cases to multiple clouds, to provide additional resiliency and security. Initially, Microsoft allowed them to do this. But as Azure faced more competition, Microsoft introduced new rules that severely limited customer choice.

One of the most significant restrictions occurred in 2019, when Microsoft adopted new licensing terms that imposed extreme financial penalties on businesses wanting to use Windows Server software on Azure’s closest competitors, such as Google Cloud and AWS. Microsoft’s own statements indicate that customers who want to move their workloads to these competitors would need to pay up to five times more. And for those who choose to keep running Windows Server on competitors’ cloud platforms (despite the cost difference), Microsoft introduced additional obstacles over the last few years, such as limiting security patches and creating other interoperability barriers.

The executives go on to highlight examples within the EU, including studies that show Microsoft’s practices lead to higher prices, reduced competition, and taxpayer waste, before highlighting how Google’s approach is different.

Google Cloud’s approach is different. We promote fair and transparent licensing for our customers. We pioneered a multi-cloud infrastructure service and a multi-cloud data warehouse, enabling workloads to run across multiple clouds. And we were the first company to provide digital sovereignty solutions for European governments and to waive exit fees for customers wishing to switch cloud providers.

Our point is a simple one: Restrictive cloud licensing practices hurt companies and impede European competitiveness. We look forward to continuing this discussion on how to keep the cloud market fair and open for European businesses and governments.

Google Is Trying to Revive a Settled Complaint

In some ways, Google is trying to revive a complaint that had already been made and settled—albeit by a different organization—without Google’s involvement.

In late 2022, the Cloud Infrastructure Service Providers in Europe (CISPE) filed a similar complaint against Microsoft, saying the company was “irreparably damaging” the EU cloud industry. In the complaint, CISPE made many of the same arguments as Google, saying Microsoft was unfairly locking customers into its own platforms and abusing its dominance in some markets to prop up its cloud business.

“Leveraging its dominance in productivity software, Microsoft restricts choice and inflates costs as European customers look to move to the cloud, thus distorting Europe’s digital economy,” Francisco Mingorance, Secretary General of CISPE, said at the time. “DG Comp must act swiftly to open a formal investigation with a statement of objections against Microsoft’s software licence abuses to defend the robust cloud ecosystem Europe needs and deserves.”

In July 2024, Microsoft and CISPE struck a deal that settled the claim, giving EU cloud providers the ability to offer Microsoft and Azure services that previously were only available to direct Microsoft customers.

“This is a significant victory for European cloud providers,” said Secretary General Mingorance. “CISPE has given Microsoft the benefit of the doubt and believes that this agreement will provide a level playing field for European cloud infrastructure service providers and their customers. Microsoft has nine months to make good on its commitment by offering solutions that allow fair licensing terms for its productivity software on European cloud infrastructures.

“To ensure the continued primacy of European members, the CISPE General Assembly has also instructed the Board to revise the governance of the association to ensure that European businesses and SMEs remain in the driving seat for CISPE campaigns should Microsoft or other global hyperscalers ask to become members. Proposed modifications will be presented ahead of the next General Assembly on 18th October 2024.”

Interestingly, the terms stipulate that neither Google Cloud nor AWS can benefit from the agreement, and AWS was excluded from the negotiations altogether. Needless to say, Google was not happy with the deal, and Zavery expressed his company’s dismay shortly after.

“MSFT playbook of paying off complainants rather than address their complaints shouldn’t fool anyone,” Zavery wrote on X. “The deal doesn’t apply to all CISPE members. CISPE admits to a payoff. EU cloud competitors become Azure customers. CISPE members under gag order, can’t file complaints anymore.”

Google clearly believes that Microsoft’s deal with CISPE does little to address the company’s underlying behavior or the harm such behavior allegedly causes the cloud industry. The fact that Google doesn’t benefit from the CISPE deal no doubt adds to the company’s motivation in filing its own complaint.

Microsoft has worked hard in recent years to separate itself from the rest of the tech industry, portraying itself as a company that is more open to working with regulators and promoting fair practices in the industry. Despite those efforts, Google clearly believes Microsoft has not changed enough and is trying to force its hand with the EU complaint.

]]>
608914
EU Votes Today On Controversial Effort to Destroy Private Messaging https://www.webpronews.com/eu-votes-today-on-controversial-effort-to-destroy-private-messaging/ Mon, 23 Sep 2024 19:44:01 +0000 https://www.webpronews.com/?p=608809 The European Union is voting today (September 23) on its controversial chat control legislation, a measure security and privacy experts warn will destroy private messaging in the bloc.

The EU has been engaged in a concerted effort to undermine privacy and security by trying to pass legislation that would force companies to break end-to-end encryption (E2EE). The bloc has proposed the use of “client-side scanning,” a technology that scans files on users’ devices and alerts the authorities if anything illegal is discovered.
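In broad strokes, client-side scanning compares a fingerprint of each file against a database of known illegal material before the file is encrypted and sent. The following is a minimal, hypothetical sketch in Python — the blocklist and file contents are invented, and real deployments use fuzzy perceptual hashes (such as Microsoft's PhotoDNA) rather than exact cryptographic hashes, which is precisely where false positives come from:

```python
import hashlib

# Hypothetical database of fingerprints of known flagged material.
BLOCKLIST = {
    hashlib.sha256(b"known-flagged-content").hexdigest(),
}

def scan_before_send(file_bytes: bytes) -> bool:
    """Return True if the file matches the blocklist.

    This runs on the user's device, on plaintext, *before* any
    end-to-end encryption is applied -- which is why critics argue
    it undermines E2EE regardless of what the scan is called.
    """
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    return fingerprint in BLOCKLIST

print(scan_before_send(b"known-flagged-content"))  # True
print(scan_before_send(b"holiday photos"))         # False
```

Note that an exact hash like this is trivially evaded by changing a single byte, which is why deployed systems rely on fuzzy perceptual matching instead — the very property that produces the misfires on legal content critics describe below.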


After previous efforts were shot down, the EU has relabeled “client-side scanning” as “upload moderation,” essentially an effort to force users to agree to client-side scanning if they want to be able to send or upload any media files via a messaging platform that otherwise features E2EE. “Upload moderation” is a clever way to essentially render E2EE moot, while still being able to technically tout support for strong encryption.

Signal President Meredith Whittaker called out the EU for its efforts, slamming the bloc for trying to pull a fast one on users, and ignoring the mathematical reality that there is no way to maintain secure and private communication while simultaneously trying to undermine or circumvent E2EE.

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Rhetorical games are cute in marketing or tabloid reporting, but they are dangerous and naive when applied to such a serious topic with such high stakes. So let’s be very clear, again: mandating mass scanning of private communications fundamentally undermines encryption. Full stop. Whether this happens via tampering with, for instance, an encryption algorithm’s random number generation, or by implementing a key escrow system, or by forcing communications to pass through a surveillance system before they’re encrypted. We can call it a backdoor, a front door, or “upload moderation.” But whatever we call it, each one of these approaches creates a vulnerability that can be exploited by hackers and hostile nation states, removing the protection of unbreakable math and putting in its place a high-value vulnerability.

We ask that those playing these word games please stop and recognize what the expert community has repeatedly made clear. Either end-to-end encryption protects everyone, and enshrines security and privacy, or it’s broken for everyone. And breaking end-to-end encryption, particularly at such a geopolitically volatile time, is a disastrous proposition.
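Whittaker's argument can be made concrete in a few lines: an intermediary that only ever sees ciphertext learns nothing, so any mandated scan has to run on the device, on plaintext, before encryption happens. Below is a toy sketch — a throwaway XOR one-time pad stands in for real E2EE, the `scanner` hook is hypothetical, and nothing here reflects Signal's actual protocol:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for real E2EE: XOR against a same-length one-time key."""
    return bytes(a ^ b for a, b in zip(data, key))

def send_with_upload_moderation(message: bytes, key: bytes, scanner) -> bytes:
    # The scan hook runs on the device, on plaintext, *before* encryption --
    # the only point in an E2EE pipeline where the content is readable.
    scanner(message)
    return xor_cipher(message, key)

seen_by_scanner = []
key = secrets.token_bytes(12)
ciphertext = send_with_upload_moderation(b"private chat", key, seen_by_scanner.append)

assert seen_by_scanner == [b"private chat"]            # the scanner saw plaintext
assert ciphertext != b"private chat"                   # the relay sees only ciphertext
assert xor_cipher(ciphertext, key) == b"private chat"  # the recipient decrypts
```

Whatever the scan hook does — hash matching, AI classification, forwarding to authorities — the pipeline only works because the plaintext is exposed before encryption, which is the sense in which "upload moderation" and client-side scanning are the same thing.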

Patrick Breyer, former Pirate Party Member of the European Parliament and co-negotiator of the European Parliament’s critical position on the proposal, says the EU is voting on the revised measure today and describes the problems such a measure will cause if it passes.

“Instead of empowering teens to protect themselves from sextortion and exploitation by making chat services safer, victims of abuse are betrayed by an unrealistic bill that is doomed in court, according to the EU Council’s own legal assessment,” writes Breyer. “Flooding our police with largely irrelevant tips on old, long known material will fail to save victims from ongoing abuse, and will actually reduce law enforcement capacities for going after predators. Europeans need to understand that they will be cut off from using commonplace secure messengers if this bill is implemented – that means losing touch with your friends and colleagues around the world. Do you really want Europe to become the world leader in bugging our smartphones and mandating untargeted blanket surveillance of the chats of millions of law-abiding Europeans?”

“Regardless of the objective – imagine the postal service simply opened and snooped through every letter without suspicion,” Breyer adds. “It’s inconceivable. Besides, it is precisely the current bulk screening for supposedly known content by Big Tech that exposes thousands of entirely legal private chats, overburdens law enforcement and mass criminalises minors.”

The EU Acknowledges the Measure Is Privacy-Invasive

Interestingly, the EU does not even try to hide the fact that its proposed measures are the most privacy-invasive solution available to it.

The EU itself described its proposed solution in 2022:

At the same time, the detection process would be the most intrusive one for users (compared to the detection of known and new CSAM) since it would involve searching text, including in interpersonal communications, as the most important vector for grooming.

Even more telling is the fact that EU ministers want to make sure they are exempt from the chat control legislation, the most damning indication of all that the EU is aware of the privacy implications of its efforts.

“The fact that the EU interior ministers want to exempt police officers, soldiers, intelligence officers and even themselves from chat control scanning proves that they know exactly just how unreliable and dangerous the snooping algorithms are that they want to unleash on us citizens,” said Breyer. “They seem to fear that even military secrets without any link to child sexual abuse could end up in the US at any time. The confidentiality of government communications is certainly important, but the same must apply to the protection of business and of course citizens communications, including the spaces that victims of abuse themselves need for secure exchanges and therapy. We know that most of the chats leaked by today’s voluntary snooping algorithms are of no relevance to the police, for example family photos or consensual sexting. It is outrageous that the EU interior ministers themselves do not want to suffer the consequences of the destruction of digital privacy of correspondence and secure encryption that they are imposing on us.”

Why Is the EU Pushing for Chat Control?

Given the issues surrounding chat control, many may wonder why the EU is hell-bent on passing such legislation, especially when the bloc touts itself as pro-privacy.

In short, chat control is being promoted as a way to combat child sexual abuse material (CSAM). Unfortunately, while such a goal is certainly admirable, trying to tackle it with chat control legislation is problematic at best.

“Let me be clear what that means: to ‘detect grooming’ is not simply searching for known CSAM. It isn’t using AI to detect new CSAM, which is also on the table. It’s running algorithms reading your actual text messages to figure out what you’re saying, at scale.” — Matthew Green (@matthew_d_green), May 10, 2022.

“It is potentially going to do this on encrypted messages that should be private. It won’t be good, and it won’t be smart, and it will make mistakes. But what’s terrifying is that once you open up ‘machines reading your text messages’ for any purpose, there are no limits.” — Matthew Green (@matthew_d_green), May 10, 2022.

Private messaging platform Threema further describes the issues:

Of course, sharing CSAM is an absolutely intolerable, horrific crime that has to be punished. Before CSAM can be shared online, however, a child must have suffered abuse in real life, which is what effective child protection should be trying to prevent (and what Chat Control does not focus on). For this and many other reasons, child protection organizations such as Germany’s Federal Child Protection Association are against Chat Control, arguing that it’s “neither proportionate nor effective.”

Besides, there’s no way of really knowing whether Chat Control would actually be (or remain) limited to CSAM. Once the mass-surveillance apparatus is installed, it could easily be extended to detect content other than CSAM without anyone noticing it. From a service provider’s point of view, the detection mechanism, which is created and maintained by third parties, essentially behaves like a black box.

Experts Say There Are Better Options

In Germany’s arguments against the EU’s efforts, Chief Prosecutor Markus Hartmann, head of the Central and Contact Point Cybercrime North Rhine-Westphalia, said the EU was going too far in its proposals. Instead, he said, law enforcement agencies should be better funded and supported so they could combat CSAM using traditional techniques. Other experts agree with Hartmann.

“Child protection is not served if the regulation later fails before the European Court of Justice,” said Felix Reda from the Society for Freedom Rights. “The damage to the privacy of all people would be immense,” he added. “The tamper-free surveillance violates the essence of the right to privacy and cannot therefore be justified by any fundamental rights assessment.”

“The draft regulation basically misses the goal of countering child abuse representations,” emphasized computer scientist and Chaos Computer Club spokeswoman Elina Eickstädt (via machine translation). “The design is based on a gross overestimation of the capabilities of technologies,” especially with regard to the detection of unknown material.

What Happens If the Legislation Passes?

If the EU is successful in passing the legislation, citizens will lose access to private communications platforms, such as Signal and Threema, as both platforms have vowed to pull out of the EU.

In due time, the issue will likely make its way to EU courts, and experts hope the legislation will be struck down there.

In the meantime, EU citizens will have to contend with what Matthew Green calls “the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR.”

]]>
608809
Microsoft Partners with Constellation to Revive Three Mile Island, Powering AI Data Centers with Carbon-Free Energy https://www.webpronews.com/microsoft-partners-with-constellation-to-revive-three-mile-island-powering-ai-data-centers-with-carbon-free-energy/ Sun, 22 Sep 2024 10:27:18 +0000 https://www.webpronews.com/?p=608689 In a landmark collaboration that underscores the intersection of energy innovation and advanced technology, Microsoft and Constellation have announced plans to reopen the dormant Three Mile Island (TMI) Unit 1 nuclear plant in Pennsylvania. This revival is not just about energy generation; it’s a significant move to meet the growing power needs of AI data centers, while simultaneously advancing carbon-free energy solutions.


The project, renamed the Crane Clean Energy Center (CCEC), marks a significant milestone in the effort to decarbonize the energy grid and power the future of AI.

A Historic Site Reborn

The reopening of Three Mile Island Unit 1 is a monumental development, both symbolically and economically. TMI, which became synonymous with nuclear energy challenges following the 1979 partial meltdown of Unit 2, is once again in the spotlight. But this time, it’s for a positive transformation. As Joe Dominguez, CEO of Constellation, pointed out, “This plant was among the safest and most reliable nuclear plants on the grid before it was prematurely shuttered due to poor economics. We look forward to bringing it back with a renewed mission to serve as an economic engine for Pennsylvania.”

The Crane Clean Energy Center, named after former Exelon CEO Chris Crane, will operate independently of Unit 2, which remains shut down and is being decommissioned. Dominguez emphasized the importance of nuclear energy in powering critical industries: “Powering industries critical to our nation’s global economic and technological competitiveness, including data centers, requires an abundance of energy that is carbon-free and reliable every hour of every day.” The revived TMI Unit 1 will add approximately 835 megawatts of carbon-free power to the grid—enough to power several AI data centers and hundreds of thousands of homes.

Microsoft’s Vision for Carbon-Free Energy

Microsoft, a key player in the tech industry’s push toward carbon neutrality, is playing a pivotal role in this venture. Through a 20-year power purchase agreement (PPA), Microsoft will secure carbon-free energy from the CCEC to match the consumption of its data centers in the PJM Interconnection region. Bobby Hollis, Microsoft’s Vice President of Energy, remarked, “This agreement is a major milestone in Microsoft’s efforts to help decarbonize the grid. We continue to collaborate with energy providers to develop carbon-free energy sources to help meet the grids’ capacity and reliability needs.”

The partnership highlights the increasing need for sustainable, always-on power sources to meet the demands of AI and cloud computing. AI workloads, particularly generative AI models like those Microsoft is integrating into its services, are significantly more energy-intensive than traditional computing tasks. The power generated by TMI Unit 1 will help ensure that Microsoft’s data centers have the reliable, carbon-free energy needed to operate efficiently.

Economic and Environmental Impact

The reopening of TMI Unit 1 is not just a win for the environment, but also for the local economy. A recent economic impact study by The Brattle Group, commissioned by the Pennsylvania Building & Construction Trades Council, estimated that the CCEC will create 3,400 direct and indirect jobs and inject $16 billion into the state’s GDP. Moreover, the plant will generate over $3 billion in state and federal taxes during its operational lifespan.

The economic benefits extend beyond job creation. Governor Josh Shapiro praised the project’s broader significance, noting, “Pennsylvania’s nuclear energy industry plays a critical role in providing safe, reliable, carbon-free electricity that helps reduce emissions and grow Pennsylvania’s economy. The Crane Clean Energy Center will strengthen Pennsylvania’s legacy as a national energy leader.”

Community engagement and philanthropy are also part of the plan. Constellation has committed $1 million in philanthropic contributions to support workforce development and community needs in the Middletown area, where the plant is located. Londonderry Township Board of Supervisors Chair Bart Shellenhamer reflected on the long-term relationship between the plant and the community: “This unit was a good neighbor to Londonderry Township for 45 years, and the Crane Clean Energy Center will bring billions in new infrastructure investment to support area businesses, schools, and public services.”

Nuclear Energy’s Role in Powering AI

The revival of TMI Unit 1 is part of a broader trend as tech companies look to nuclear power to meet the growing energy demands of AI. AI data centers, which power technologies like Microsoft’s Azure OpenAI service, require vast amounts of electricity—often significantly more than traditional data centers. With wind and solar energy unable to provide constant power, nuclear energy is uniquely positioned to fill this gap, offering reliable, round-the-clock electricity.

Bobby Hollis of Microsoft underscored the alignment between AI’s needs and nuclear power: “We run around the clock. They run around the clock.” This mutual reliance on 24/7 power makes nuclear energy an attractive option for tech companies like Microsoft that have pledged to be carbon-negative by 2030.

The reopening of TMI Unit 1 also reflects a broader resurgence of interest in nuclear energy, both in the U.S. and globally. As nations work to electrify their economies and reduce reliance on fossil fuels, nuclear energy’s capacity to provide consistent, low-carbon power has garnered renewed attention. U.S. Congressman Scott Perry, who represents Pennsylvania’s 10th Congressional District, emphasized the importance of this trend: “This critical step forward will ensure Pennsylvania has sufficient baseload power to meet its needs for decades to come while producing 3,400 jobs in our community.”

Recommissioning the Plant is Complex

While the potential benefits of reopening TMI Unit 1 are substantial, the path to recommissioning the plant is complex. The U.S. Nuclear Regulatory Commission (NRC) must approve the restart, which will involve a comprehensive safety and environmental review, as well as updates to the plant’s turbine, generator, cooling systems, and control infrastructure. Constellation is aiming to complete the review and recommission the plant by 2028.

Despite these challenges, public support for the project is strong. According to a poll conducted by Susquehanna Polling & Research, Pennsylvanians favor restarting the plant by a 2-to-1 margin, with 70% of respondents supporting nuclear energy as a source of reliable, carbon-free power. This public backing is crucial as Constellation and Microsoft push forward with their plans.

Maria Korsnick, President and CEO of the Nuclear Energy Institute, highlighted the broader significance of the project: “The Crane Clean Energy Center is a fitting honor for a nuclear industry leader and will bring significant benefits to Pennsylvania and the nation. In addition to restoring jobs and clean, reliable energy, this investment will help the country meet its climate and energy independence goals.”

Powering the Future of AI

The convergence of artificial intelligence and the energy sector is rapidly transforming how we generate, distribute, and consume energy. As AI becomes more embedded in global infrastructure, it’s not just revolutionizing traditional industries, but it’s also reshaping the future of clean energy and sustainability. The partnership between Microsoft and Constellation to restart the Three Mile Island nuclear plant is a testament to how AI and energy innovation can create new economic opportunities while driving sustainable progress.

According to Bobby Hollis, Microsoft’s Vice President of Energy, this partnership is a significant milestone for the company’s goal of becoming carbon-negative. “Microsoft continues to collaborate with energy providers to develop carbon-free energy sources to help meet the grid’s capacity and reliability needs,” Hollis said. The power purchase agreement (PPA) between Microsoft and Constellation will allow the tech giant to leverage carbon-free nuclear energy to power its AI data centers. This collaboration marks a critical step in supporting the infrastructure required to fuel the increasing demands of artificial intelligence.

AI’s Role in Optimizing Energy Infrastructure

AI has already started to play a pivotal role in the energy sector. By integrating AI-driven insights, companies like Constellation and Georgia Power are not only optimizing the performance of existing infrastructure but also preventing potential failures before they occur. As Kevin L. Jackson from Supply Chain Now noted, AI enables continuous monitoring of energy assets, “reducing downtime, extending asset lifespans, and optimizing energy production by accurately predicting demand.” This predictive capability reduces operational costs and increases efficiency, making it a valuable tool for nuclear plants like Three Mile Island.
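The demand-prediction piece need not be exotic: in principle it is a model fit to historical load data. Here is a deliberately simple sketch with made-up numbers — real utilities use far richer models than a moving average, so treat this only as an illustration of the idea:

```python
def forecast_next(demand_history, window=3):
    """Forecast the next reading as the mean of the last `window` readings."""
    recent = demand_history[-window:]
    return sum(recent) / len(recent)

# Synthetic hourly demand in megawatts (not real grid data).
hourly_mw = [810, 820, 835, 840, 830, 825]
next_hour = forecast_next(hourly_mw)  # (840 + 830 + 825) / 3, roughly 831.7 MW
```

In production, the same loop — observe, predict, compare, adjust — is what lets operators anticipate peaks, schedule maintenance, and keep an always-on plant matched to an always-on load.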

Joe Dominguez, CEO of Constellation, highlighted the reliability and consistency that nuclear power offers in addressing global energy needs: “Powering industries critical to our nation’s global economic and technological competitiveness requires an abundance of energy that is carbon-free and reliable every hour of every day.” This consistency is particularly crucial as AI applications demand vast amounts of energy, often around the clock.

Leveraging AI for Sustainable Growth

In addition to improving operational efficiency, AI is a critical enabler for sustainable growth. A recent report from McKinsey estimated that AI could enable the energy sector to realize up to $1.3 trillion annually by 2035 through innovations that streamline processes and optimize energy usage. By facilitating the integration of renewable energy sources, AI can better balance energy supply and demand, allowing companies like Microsoft to meet their net-zero goals more effectively.

AI also plays a role in environmental conservation efforts. For instance, Pacific Gas and Electric (PG&E) has deployed AI to detect wildfires before they spread, using high-definition cameras and advanced algorithms to monitor large swaths of land. As AI continues to evolve, its potential applications in the energy sector seem limitless, from improving grid reliability to accelerating the adoption of clean energy technologies.

The Economic and Social Impact

The restart of Three Mile Island’s Unit 1 will not only supply 835 megawatts of carbon-free electricity to the grid but also create approximately 3,400 direct and indirect jobs, generating significant economic benefits for Pennsylvania. An economic impact study by the Brattle Group, commissioned by the Pennsylvania Building and Construction Trades Council, revealed that the project would contribute an estimated $16 billion to the state’s GDP and over $3 billion in state and federal taxes.

Rob Bair, President of the Pennsylvania State Building and Construction Trades Council, emphasized the long-term impact of these efforts: “The CCEC will support thousands of family-sustaining jobs for decades to come.” The collaboration between Microsoft and Constellation highlights how tech companies can play a central role in reshaping local economies while contributing to national and global energy goals.

The Global Push for Carbon-Free Energy

Governments worldwide are recognizing the potential of nuclear energy in the fight against climate change. Josh Shapiro, Governor of Pennsylvania, affirmed the importance of the Crane Clean Energy Center in sustaining Pennsylvania’s energy leadership. “My Administration will continue to work to cut energy costs and ensure the reliability of our energy grid so that Pennsylvanians can have access to affordable power made right here in Pennsylvania,” Shapiro said. The commitment to revitalize nuclear power reflects a broader global movement toward cleaner energy solutions.

Additionally, Michael Goff, Acting Assistant Secretary for the Department of Energy’s Office of Nuclear Energy, underscored the critical role that always-on nuclear power will play in meeting the country’s growing energy demands. “Always-on, carbon-free nuclear energy plays an important role in the fight against climate change and meeting the country’s growing energy demands,” Goff said.

Energy Sector Must Keep Pace

As the world accelerates towards a future driven by AI and digital transformation, the energy sector must keep pace with the increasing demand for clean, reliable power. The Crane Clean Energy Center, with its emphasis on innovation, sustainability, and economic growth, represents a powerful example of how partnerships between technology giants like Microsoft and energy leaders like Constellation can reshape the global energy landscape. AI will not only enhance operational efficiencies but also unlock new growth opportunities and set the stage for a carbon-free future. As Joe Dominguez aptly stated, “Before it was prematurely shuttered, this plant was among the safest and most reliable nuclear plants on the grid, and we look forward to bringing it back with a renewed mission to serve as an economic engine for Pennsylvania.”

]]>
608689
The CTO’s Essential Role in Start-Up Expansion Strategies https://www.webpronews.com/start-up-expansion-strategies/ Fri, 20 Sep 2024 09:12:37 +0000 https://www.webpronews.com/?p=608583

From Start-Up to Scale-Up: The CTO’s Role in Growth and Expansion

As startups grow and expand, the importance of the Chief Technology Officer (CTO) grows as well. CTOs play a central role in ensuring that a company’s technology systems and teams are prepared to manage growth and development. This piece delves into the duties of CTOs in guiding startups toward lasting success and a strong position in the market.

The Importance of the CTO’s Role in a Company’s Growth Journey

Chief technology officers play a pivotal role in guiding start-ups through the challenges of growth. Utilizing the services of a CTO can help companies align their technical progress with their business goals. A CTO’s flexibility and capacity for innovation are essential for staying competitive in an evolving market.

Laying the Foundation for Scalability

Crafting an adaptable framework is fundamental to any CTO’s strategy. This entails choosing a technology stack capable of accommodating future growth and implementing effective practices for managing code, running tests, and deploying solutions. The CTO-as-a-service model assists startups in making informed decisions about these components.

Developing a technology roadmap is also crucial. The roadmap should detail how technological advancements will be used to meet business objectives and ensure that technology investments are strategically aligned with the company’s overarching goals. Collaboration with other executives is key to seamlessly integrating technology initiatives into the company’s overall business strategy.

Building and Leading High-Performing Technology Teams

Attracting and keeping talent is a core aspect of a CTO’s role. By creating a creative work environment, CTOs can assemble a team capable of propelling the company’s progress. Fostering a strong engineering culture is key to innovation and productivity, and CTOs should promote continuous learning and experimentation to keep the team motivated and forward-thinking.

Adopting agile methodologies is vital for enhancing efficiency and adaptability. With these approaches in place, the team can swiftly adapt to change and deliver high-quality products. The CTO-as-a-service concept also provides flexible options for adjusting team size as needed.

Ensuring Data Security and Compliance

Cybersecurity remains a priority for any growing business. Chief Technology Officers are tasked with putting security measures in place to safeguard customer data and prevent breaches, which involves conducting regular security audits and applying updates to protect against evolving threats. Adhering to data privacy regulations is essential for upholding trust and avoiding legal complications, so CTOs must keep abreast of relevant laws and ensure that the company’s practices align with regulatory requirements.

Consistently evaluating and enhancing security protocols is vital for risk mitigation. CTOs should take a proactive approach, identifying and resolving vulnerabilities before they can be exploited.

Optimizing Technology for Customer Acquisition and Retention

Data-driven strategies are key to customer outreach and loyalty. Chief Technology Officers should use data analysis to understand customer behavior and adjust plans accordingly. Working closely with marketing and sales departments is vital for refining the customer experience, and CTOs can offer guidance and tooling to boost those teams’ performance.

Personalization and recommendation systems play a major role in elevating user satisfaction. Introducing these tools delivers relevant content, enhancing customer happiness and fostering long-term customer relationships.

Scaling Technology Infrastructure

Ensuring optimal system performance is crucial when dealing with growing traffic volumes. Chief Technology Officers need to guarantee that the infrastructure can expand effectively to accommodate rising demand. Automation plays a key role in simplifying operations and minimizing manual involvement, improving overall productivity.

Cloud technologies provide versatility and cost efficiency, making them well suited to expanding businesses. CTOs should make use of these technologies to establish a dependable, scalable infrastructure.

Managing Technical Debt and Ensuring Code Quality

Keeping code quality in check is crucial for handling technical debt. CTOs need to evaluate and prioritize that debt to prevent it from becoming an obstacle to progress. Establishing code review protocols and quality assurance practices helps uphold high standards.

Maintaining a balance between shipping new features and refining existing code is key to sustainable progress. CTOs should drive feature development while also enhancing and preserving the current codebase.

Conclusion

Chief Technology Officers propel a company’s advancement by establishing a scalable technological framework, forming effective teams, safeguarding data integrity, and optimizing technology for attracting and retaining customers. Adaptability, scalability, and collaboration are the qualities that allow CTOs to overcome the obstacles of business expansion and stimulate growth.

CTOs should see themselves as catalysts for growth. By applying their knowledge and fostering creativity, CTOs can have a real impact on their company’s achievements and competitive edge in the market.

]]>
608583
Microsoft Releases Windows App On Most Platforms https://www.webpronews.com/microsoft-releases-windows-app-on-most-platforms/ Thu, 19 Sep 2024 22:45:39 +0000 https://www.webpronews.com/?p=608551

Microsoft has released its Windows App on most major platforms—Windows, macOS, iOS, iPadOS, Android, and web browsers—giving users a central way to connect to Windows.

Windows App is designed to give users a unified way to connect to and manage Windows in the cloud, enabling productivity on-the-go.

With Windows App, you can enjoy a unified experience that makes it simple for people to connect to the Windows experience they know and love from any device. Enhance productivity with features such as customizable home screens, multi-monitor support, and USB redirection. Windows App also offers advanced security features, including multifactor authentication, to ensure a seamless and robust connection and enable efficient work from any location, at any time.

Windows App provides a consistent, reliable experience for all devices, enabling secure access from any location. Whether you need to connect to Windows 365, Azure Virtual Desktop, Remote Desktop, Remote Desktop Services, or Microsoft Dev Box, Windows App simplifies the process, allowing you to manage and utilize these resources from a single, intuitive app. Whether you are an IT administrator or an end user, Windows App provides immense value. IT admins benefit from enhanced security and streamlined management, while end users can tailor their experience to fit their personal workflows.

Interestingly, although Microsoft claims Windows App is available on all platforms, Linux is notably absent from the list, and Android is still in public preview. The absence of Linux is particularly interesting, given how popular the OS is with system admins and developers, not to mention its use within Microsoft. Nonetheless, Linux users should be able to use the service via web browser.

https://youtu.be/j0XU59VbKOc?feature=shared
]]>
608551
Ubuntu 24.10 Will Improve Snap Permissions https://www.webpronews.com/ubuntu-24-10-will-improve-snap-permissions/ Tue, 17 Sep 2024 11:30:00 +0000 https://www.webpronews.com/?p=608286 Canonical is making some much-needed improvements to Snap permissions in Ubuntu 24.10, introducing permission prompts that give users more control over what Snaps can access.

Snaps are containerized apps Canonical has pioneered in an effort to solve the complexity surrounding traditional Linux packages. Snaps, like competing Flatpaks, have all their dependencies in a self-contained package, rather than relying on the system’s libraries. As a result, Snaps provide a way to have the latest applications, even on older Linux distros. In addition, Snaps provide a measure of sandboxing, because the app is self-contained.

Listen to a podcast conversation on Ubuntu 24.10 and its new experimental feature!

 

Ubuntu Snaps Permission Prompt – Credit Canonical

In an effort to improve Snaps’ sandboxing and security, Ubuntu 24.10 will introduce permission prompts, as described by Oliver Smith, Interim Engineering Director for Ubuntu Desktop.

Hi folks! As a bonus update ahead of the main September post I want to switch things up a bit and introduce you to an experimental new feature landing in the Ubuntu 24.10 dailies soon.

Permissions prompting is a critical tool for privacy and security conscious users to control, manage and understand the behaviour of applications running on their machine. This implementation represents a significant step forward in application security, and distinguishes itself from traditional XDG Desktop Portals by enabling fine-grained access control over unmodified binaries without requiring changes to the application code. By leveraging Ubuntu’s AppArmor implementation, prompting enforces sandboxing and mediates access at the system call level to ensure that every action is tightly controlled and subject to user consent, even for applications that are entirely unaware of this mediation.

The snapd, security and desktop teams at Canonical have collaborated closely over a number of years to bring this feature to life and we’re excited to deliver an initial opt-in implementation to Ubuntu 24.10 focussed on home interface permissions so that we can refine the experience based on your feedback.

Ubuntu Snaps Permission Prompt – Advanced – Credit Canonical

Smith emphasizes that the permissions prompting is a work in progress, but new features and abilities will be added.

In this release the Security Center is the home for managing your prompt rules, over time we will expand its functionality to cover additional security-related settings for your desktop such as encryption management and firewall control.

As always, the demo above represents a work in progress, with further UI improvements still to land over the next few weeks ahead of release (and beyond). This implementation, as an opt-in feature, is designed to surface as much information to the user as possible regarding what actions the application is attempting to perform, what permissions you will be granting and their duration. We expect this to be iterated over based on user feedback.

Permission prompting is a welcome improvement for Snaps and will make managing them much easier, while improving the security and stability of Ubuntu.

]]>
608286
Protecting Your Web Apps in the Age of Cloud Computing https://www.webpronews.com/web-apps-cloud-computing/ Mon, 16 Sep 2024 16:59:09 +0000 https://www.webpronews.com/?p=608250 As hacking threats evolve in complexity, securing your web applications feels like an uphill battle. Between keeping software updated, monitoring for misconfigurations, and analyzing traffic patterns, it’s a lot to juggle. This complexity only increases exponentially when you move web apps to the cloud.

With more attack surfaces and data now outside your firewall, you absolutely need to re-evaluate security in a cloud context. But don’t panic! While the cloud introduces new risks, providers also give you powerful tools to lock things down.

Follow cloud-focused security best practices, and your organization can confidently reap benefits like scalability and cost savings without compromising safety. This guide explores the top challenges you’ll face along with tactical measures for protecting cloud-hosted apps and data. 

Key Security Challenges in Cloud Environments

Adopting cloud computing certainly provides advantages, but you also need to be aware of the new attack surfaces and vulnerabilities you may be exposed to.

Shared Responsibility Model

In the cloud, providers like AWS and Azure are responsible for securing the underlying infrastructure and hardware. But you are responsible for securing everything deployed on top – including web apps, data, and cloud configuration. This “shared responsibility” model means you carry more weight for cloud security than you think.

For example, misconfiguring a cloud database to allow public access could expose sensitive customer data. The provider secured the database software and infrastructure, but you were responsible for the database access permissions. 

Increased Attack Surface

By nature, cloud environments have more points of entry spread across more services and users. Take AWS – with its over 200 services, each component and connection between components represents a potential attack vector. As you deploy more cloud resources, you increase the “blast radius” hackers could exploit to breach other parts of your infrastructure if not properly locked down.

Yet, despite these risks, 66% of organizations currently use more than 25 SaaS applications. It’s not much of a surprise that 44% of them have experienced a cloud data breach at some point. 

Data Security and Privacy Concerns

Migrating data to the cloud necessarily means relinquishing physical control. You risk exposing sensitive customer and internal business data without proper encryption and access controls.

Hackers have lots of motivations for targeting weakly protected cloud data stores – from accessing usernames/passwords for broader identity theft, to holding databases for ransomware attacks against your organization.

Not addressing cloud data security could land you in legal hot water as well. Remember that you are liable for any data mishandled or exposed by cloud providers under most regulatory compliance frameworks like HIPAA and PCI DSS.

Essential Security Measures for Cloud-Based Web Apps

So, now that you understand the primary cloud security challenges, what can you actually do about them? Here are four essential measures you should implement right away to protect your cloud-hosted web apps:

Secure Cloud Configuration and Access Control

Properly configuring cloud resources is at the heart of the “shared responsibility” model. Make security and access control central to your cloud infrastructure strategy from day one.

For each cloud component and service, apply strict permissions and monitor closely for misconfigurations that could unintentionally expose data. Double-check settings like S3 buckets and database connections that are internet-facing by default.

Tools like AWS Config and Azure Policy help systematically enforce and validate proper security configurations across cloud environments at scale. Make use of them.
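As an illustration of the kind of check such tools automate, the sketch below parses a bucket policy document and flags statements that grant access to everyone. The policy shown is a hypothetical example; real services like AWS Config evaluate many more conditions than this.

```python
import json

def find_public_statements(policy_json: str) -> list:
    """Return policy statements that grant access to everyone (Principal "*")."""
    policy = json.loads(policy_json)
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_public:
            flagged.append(stmt)
    return flagged

# Hypothetical bucket policy that accidentally allows public reads.
BUCKET_POLICY = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Principal": "*",
     "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}"""

print(len(find_public_statements(BUCKET_POLICY)))  # 1 statement flagged
```

A check like this can run in a CI pipeline so that an overly permissive policy never reaches production in the first place.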

Web Application Firewalls (WAFs)

Deploying a robust Web Application Firewall (WAF) should be a cornerstone of your cloud security strategy. WAFs act as protective barriers for web apps by filtering incoming HTTP/HTTPS traffic and blocking common web-focused attacks.

Your standard firewalls and network security groups configured at the infrastructure level can’t provide application-layer protection against attacks targeting vulnerabilities in code or web-visible inputs. That’s where WAFs come in – they “understand” web traffic to filter malicious payloads designed to exploit web apps specifically.

For example, a cloud WAF can detect and prohibit attempts to inject malicious SQL commands in input fields that could trick the database behind your web app. Or it can stop cross-site scripting (XSS) attacks that try to inject browser-executable JavaScript into responses rendered by your app’s front end. Unfortunately, these and other “Layer 7” attacks are quite common but readily defended against by WAF policies once you have them in place.
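To make the idea concrete, here is a deliberately naive sketch of signature-based request filtering. The two regexes are toy examples of SQL injection and XSS signatures; production WAF rule sets are vastly more sophisticated and also account for encodings and evasion tricks.

```python
import re

# Toy signature patterns for two common "Layer 7" attacks.
SIGNATURES = {
    "sqli": re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+|union\s+select",
                       re.IGNORECASE),
    "xss": re.compile(r"<\s*script|javascript\s*:", re.IGNORECASE),
}

def inspect(param_value: str) -> list:
    """Return the names of any attack signatures the input matches."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(param_value)]

print(inspect("' OR 1=1 --"))                 # ['sqli']
print(inspect("<script>alert(1)</script>"))   # ['xss']
print(inspect("ordinary search term"))        # []
```

A real WAF applies rules like these to every HTTP parameter, header, and body field before the request ever reaches application code.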

Another major benefit of WAFs in the cloud is protection against distributed denial-of-service (DDoS) attacks—those frustrating bandwidth-flooding events that can bring your web services to their knees. By absorbing and dispersing excess traffic volumes before they reach your apps, a cloud-based WAF acts like a bodyguard, taking the brunt of attacks on your behalf.

Vulnerability Management and Patching

Hackers have become proficient at chaining exploits against known vulnerabilities in web frameworks like Struts and common dependencies. Actively monitoring all software components and libraries for patches prevents attackers from gaining an easy foothold into your web apps.

Cloud services increasingly automate vulnerability tracking and patching workflows – use them rather than relying on manual efforts. AWS Inspector, for example, regularly scans deployments for common issues and vulnerabilities across application stacks, operating systems, and databases.

Data Encryption at Rest and in Transit

Encrypt sensitive web app data both "at rest" in databases and cloud storage buckets and "in transit" across networks. Proper encryption is the last line of defense if other perimeter security measures fail.

Apply database and object-level encryption rather than relying on transport encryption during data transfers. AWS and Azure make applying robust encryption far more convenient than most organizations could achieve on their own – so take advantage of native capabilities.

Taking Control of Your Cloud Security Posture

The cloud security best practices I outlined are just the beginning. Here are some additional priority areas you should address:

Centralize Identity and Access Management

The expanding cloud attack surface makes managing identities and access that much more critical. To limit breach impact, centralize identity through single sign-on rather than individual credentials per service.

Apply the principle of least privilege when granting user access, and leverage multi-factor authentication for all cloud admin accounts. Tools like Azure Active Directory and AWS IAM let you manage access at scale.
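For example, a least-privilege IAM policy might grant a reporting service read-only access to a single bucket and nothing else (the bucket name below is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyReportsAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }
  ]
}
```

Anything not explicitly allowed – writes, deletes, other buckets – is denied by default, which is exactly the posture you want.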

Institute Data Loss Prevention Controls

Preventing data exfiltration should be a top concern when shifting to the cloud. Data loss prevention (DLP) solutions analyze and automatically block sensitive data like credit card numbers from leaving environments.

Integrate DLP capabilities at multiple layers when handling regulated data or IP – into data warehouses, web application firewall rules, cloud storage permissions, and endpoint agents.
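A simplified sketch of one DLP building block: scanning outbound text for card-number-like sequences and validating them with the Luhn checksum that payment cards use. Real DLP products combine many such detectors with context analysis and policy rules.

```python
import re

# Matches 13-16 digit runs, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    """True if text contains a Luhn-valid card-number-like sequence."""
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False
```

A filter like this, wired into an email gateway or storage egress point, can block a message carrying a test number like 4111 1111 1111 1111 while letting ordinary numeric data through.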

Architect with Security in Mind

Rather than bolting on security as an afterthought, bake it into the design of cloud architectures from the initial build phase.

Segmentation, protection of sensitive data flows, logging, and compliance auditing need to be structural components of technology stacks – not tacked on at the end where visibility and control are limited.

Promote Security Training for Cloud Teams

Your own staff likely poses the biggest insider threat as they interact with cloud environments daily. Expand security training to cover common cloud risks, proper access permissions, and data handling so mistakes don’t turn into breaches.

Include developers, engineers, data analysts, and operators who help manage cloud infrastructure and web architectures.

Final Word

The bottom line is that comprehensively securing cloud environments requires thinking beyond just checking boxes on common best practices. Continually assess additional focus areas – from deeper identity lifecycle management to compliance automation and disaster recovery preparedness – as part of your cloud security strategy.

]]>
608250
CISA Warns of Critical Ivanti Vulnerability Being Exploited https://www.webpronews.com/cisa-warns-of-critical-ivanti-vulnerability-being-exploited/ Mon, 16 Sep 2024 15:05:28 +0000 https://www.webpronews.com/?p=608241 The Cybersecurity & Infrastructure Security Agency is warning of a critical vulnerability in Ivanti Cloud Services Appliance (CSA) that is being actively exploited.

Ivanti issued a security advisory for CSA 4.6 to address a high-severity vulnerability that could give attackers unauthorized access to devices running CSA.

An OS command injection vulnerability in Ivanti Cloud Services Appliance versions 4.6 Patch 518 and before allows a remote authenticated attacker to obtain remote code execution. The attacker must have admin level privileges to exploit this vulnerability.

To make matters worse, CSA 4.6 is End-of-Life (EOL), limiting availability of future updates.

Please note: Ivanti CSA 4.6 is End-of-Life, and no longer receives patches for OS or third-party libraries. Additionally, with the end-of-life status this is the last fix that Ivanti will backport for this version. Customers must upgrade to Ivanti CSA 5.0 for continued support. CSA 5.0 is the only supported version and does not contain this vulnerability. Customers already running Ivanti CSA 5.0 do not need to take any additional action.

CISA is now warning agencies of the vulnerability, instructing them to immediately take measures to mitigate the risk.

CISA recommends users and administrators review CISA and FBI’s joint guidance on eliminating OS command injections and the Ivanti security advisory and apply the recommended updates.

Note: CISA has added CVE-2024-8190 to its Known Exploited Vulnerabilities Catalog, which, per Binding Operational Directive (BOD) 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities, requires Federal Civilian Executive Branch (FCEB) agencies to remediate identified vulnerabilities by the specified due date to protect FCEB networks against active threats.

Because Ivanti CSA 4.6 is EOL, however, CISA is recommending agencies take the additional step of replacing it, since it will not receive future security updates.

Action: As Ivanti CSA has reached End-of-Life status, users are urged to remove CSA 4.6.x from service or upgrade to the 5.0.x line of supported solutions, as future vulnerabilities on the 4.6.x version of CSA are unlikely to receive future security updates.

]]>
608241
Breaking Into Cybersecurity in 2024: Do You Have What It Takes to Succeed? https://www.webpronews.com/breaking-into-cybersecurity-in-2024-do-you-have-what-it-takes-to-succeed/ Mon, 16 Sep 2024 05:52:36 +0000 https://www.webpronews.com/?p=608209 Cybersecurity is becoming more complicated and entangled with all aspects of business and society, and with that comes a growing demand for skilled information security (InfoSec) professionals. But breaking into this field in 2024 is no easy feat. While headlines often highlight the shortage of cybersecurity professionals, the reality is that entering the industry can be challenging. InfoSec requires not only technical expertise but also a mindset built on curiosity, problem-solving, and resilience. In this deep dive, we explore the key traits, skills, and knowledge that aspiring InfoSec professionals need to succeed in 2024.

The Harsh Reality: Cybersecurity Is Tough

“Cybersecurity is hard,” says a seasoned Security Operations Analyst (SOC) who has been working in the field for years. While many assume that the abundance of job openings makes it easy to enter InfoSec, the truth is more nuanced. “There are too many things to understand when you’re just starting out—everything from basic computer knowledge to the security architecture of an entire organization,” the analyst explains. Cybersecurity is a highly technical field, and newcomers can easily become overwhelmed by the complexity of its systems.

Unlike many other IT fields, there is no fast track into InfoSec. “A lot of people don’t start in cybersecurity,” the analyst continues. “They start in more general IT roles, like help desk positions, where they can build a solid foundation of knowledge.” By working in roles that involve diagnosing hardware and software issues, aspiring professionals can develop the skills needed to transition into more security-focused roles over time. Building this foundation is critical because cybersecurity requires a deep understanding of how different systems work together and how to secure them effectively.

The Key Traits of a Successful InfoSec Professional

Breaking into InfoSec requires more than just technical knowledge. “One of the most important characteristics you need is the drive to learn,” explains a SOC Analyst. InfoSec professionals must stay on top of the latest threats, vulnerabilities, and technologies. The constantly changing nature of the field means that those who are curious and committed to continuous learning will thrive, while those who expect to stop learning once they’ve secured a job will struggle to keep up.

“If you hear about a company getting hacked on the news, the average person will just move on,” the analyst explains. “But if you’re the type who starts Googling why and how the hack happened, that curiosity is a great sign that you’ll be successful in cybersecurity.” This innate drive to investigate, learn, and understand beyond the surface level is what separates the best InfoSec professionals from the rest.

In addition to curiosity, technical aptitude is a must. “You need to understand the basics of how systems interact—things like securing new devices, cloud security solutions, network traffic, and access control,” says the analyst. Without this foundational knowledge, it will be difficult to navigate the complex ecosystems that InfoSec professionals are responsible for protecting. From securing endpoints to understanding how web traffic should be monitored, knowing how to address each of these layers is crucial.

Key Knowledge Areas and Skills for Success

One of the biggest misconceptions about InfoSec is that it’s all about hacking or penetration testing, as seen in popular media. However, the most common entry-level role in cybersecurity is that of a Security Analyst, typically on the “blue team,” responsible for defending against attacks. “Most people won’t start as a penetration tester,” the SOC Analyst says. “They’ll likely begin by monitoring systems, responding to incidents, and remediating vulnerabilities.”

The analyst emphasizes that this role can be overwhelming at times, especially when you’re flooded with alerts and incidents. “It can feel like a constant grind, and burnout is common,” they note. With the growing sophistication of attacks—especially as artificial intelligence (AI) evolves to power tools like deepfakes and adaptive email scams—the job is only getting harder. “AI is making it more difficult for us to defend against certain threats, especially phishing scams that trick people into sharing sensitive information.”

Given the increasing complexity of the field, InfoSec professionals need a wide range of skills to succeed. Some of the key technical skills include:

  • Understanding networking and infrastructure: Security starts with knowing how systems communicate. Understanding IP addressing, firewalls, VPNs, and network traffic is foundational to any InfoSec role.
  • Familiarity with security tools: Endpoint Detection and Response (EDR) tools like CrowdStrike, firewall solutions like Palo Alto, and monitoring platforms like Splunk are widely used. Knowing how to use these tools effectively is crucial.
  • Knowledge of cloud security: With more organizations moving to cloud environments like AWS, Azure, or Google Cloud, it’s essential to understand cloud security concepts, such as identity and access management (IAM), security groups, and encryption.
  • Coding skills: While InfoSec professionals may not be building applications, understanding programming languages like Python and SQL helps with automating tasks and analyzing logs.
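As a small example of the networking fundamentals above, Python's standard `ipaddress` module makes subnet checks trivial – the kind of test an analyst runs constantly when triaging alerts. The "internal" ranges below are hypothetical.

```python
import ipaddress

# Hypothetical internal ranges an analyst might treat as trusted.
INTERNAL_NETS = [ipaddress.ip_network(n)
                 for n in ("10.0.0.0/8", "192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    """True if the address falls inside one of the internal ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

print(is_internal("10.4.2.7"))   # True
print(is_internal("8.8.8.8"))    # False
```

Knowing at a glance whether an IP in a log line is internal or external is often the first step in deciding how seriously to take an alert.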

“A lot of our job involves automating repetitive tasks,” says the analyst. For instance, one of their daily tasks involved manually uploading suspicious PDF attachments to a sandbox environment for investigation. “I automated that process, freeing up time to focus on more critical tasks.” Being able to automate workflows is a powerful skill in cybersecurity, allowing professionals to spend more time addressing advanced threats rather than routine processes.
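The analyst's exact workflow isn't described, but an automation like the one mentioned might start by fingerprinting quarantined attachments so their hashes can be checked against threat-intelligence services before full sandbox detonation. The sketch below is hypothetical; folder layout and file patterns would vary by environment.

```python
import hashlib
from pathlib import Path

def hash_attachments(folder: str, pattern: str = "*.pdf") -> dict:
    """Map each matching file to its SHA-256, ready for a reputation lookup."""
    digests = {}
    for path in sorted(Path(folder).glob(pattern)):
        digests[path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests
```

Even a small script like this turns a repetitive manual step into a one-liner, which is exactly the kind of time savings the analyst describes.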

The Importance of Resilience and the “Grind”

Breaking into cybersecurity is not a quick process. “When you’re starting out, you need to appreciate the grind,” the SOC Analyst warns. For many, this grind involves working longer hours, self-study, and continuous training. “When I first started, I didn’t know much about security concepts, so I had to grind hard to catch up,” the analyst recalls. Whether it’s learning how to use security information and event management (SIEM) systems like Splunk or mastering network security concepts, you need to be prepared to dedicate personal time to learning.

This grind can take a toll, particularly on those new to the field. Long hours, challenging incidents, and a steep learning curve can lead to burnout. However, the rewards are substantial for those who can persevere. “The key to staying consistent with the grind is discipline. I blocked out time in my personal schedule for self-study every day. If you don’t make time for it, you won’t progress.”

Resilience is another critical trait that cybersecurity professionals need to cultivate. “The reality is, the threats never stop,” says the analyst. “You could prevent 99 attacks, but the one that gets through is what everyone will focus on. That’s part of the job—you have to be ready to keep going, even after setbacks.”

Preparing for a Career in InfoSec

So, how can aspiring InfoSec professionals prepare for success in 2024? The SOC Analyst offers several practical tips for getting started. First, they recommend pursuing foundational IT certifications like CompTIA’s A+, Security+, and Network+ to build a baseline understanding of computers, networks, and security concepts. “Professor Messer’s free YouTube playlist is a great resource if you’re just starting,” the analyst suggests. After building a foundation, they recommend expanding into more specific certifications like Certified Information Systems Security Professional (CISSP) or Certified Ethical Hacker (CEH), depending on your goals.

Additionally, hands-on experience is crucial. “Build a home lab,” advises the analyst. “It’s a simulated environment where you can practice securing devices, managing firewalls, and monitoring traffic.” Using platforms like Splunk’s free trial can help aspiring professionals get hands-on experience with the tools used by SOC teams.

Creating projects that simulate real-world scenarios—such as investigating network traffic for signs of compromise—will also give job seekers a strong edge in interviews. “Showing an interviewer that you’ve taken the initiative to build your own lab is far more impressive than just talking about certifications.”

Can You Make It in InfoSec in 2024?

The cybersecurity field offers exciting opportunities, but it also comes with challenges. As threats evolve and organizations become more reliant on technology, the role of InfoSec professionals will only become more critical. But success in this field requires more than technical skills—it demands curiosity, resilience, and a relentless drive to learn.

“Cybersecurity is not for the faint of heart,” the analyst emphasizes. “But if you’re passionate about solving puzzles, protecting systems, and constantly learning, it’s one of the most rewarding fields you can enter.”

In 2024, as AI-driven threats and digital transformations increase the complexity of cybersecurity, the need for well-rounded, highly skilled InfoSec professionals will continue to grow. For those willing to put in the work, the opportunities are endless—so, do you have what it takes to become an InfoSec professional in 2024?

]]>
608209