Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is changing all business sectors and our daily work even more than before. Digitalisation is also changing the IT sector: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring will become truly possible. The workload of software implementation projects will be reduced significantly as software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to change, as security threats increasingly come from the digital world. IT sector digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If so, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy and, frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The increase in productivity is promised to be stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating significant acceptance of these solutions. When properly leveraged, mobility promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. These two trends are connected, or at least should connect. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken more into use. Cloud is the next generation of the supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011”).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation, you need to combine traditional IT with agile and web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating two-speed IT (bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
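A minimal sketch of what such an API-centric layer might look like; the legacy record format and all field names below are invented for illustration:

```python
# Two-speed IT sketch: a thin API layer over a slow-changing back end.
# Everything here (record shape, field names) is hypothetical.

def legacy_customer_record(customer_id):
    """Stand-in for the slow-moving system of record (traditional IT)."""
    return {"CUST_ID": customer_id, "NAME_1": "Jane", "NAME_2": "Doe"}

def customer_api(customer_id):
    """Stable, API-friendly view for the fast-moving side (agile IT).

    Front-end teams iterate against this JSON-like shape while the
    back-end system evolves at its own, slower pace.
    """
    rec = legacy_customer_record(customer_id)
    return {
        "id": rec["CUST_ID"],
        "name": f'{rec["NAME_1"]} {rec["NAME_2"]}',
    }
```

The point of the layer is decoupling: the user-engagement side codes against the stable `customer_api` shape, and only the adapter changes when the operational system does.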

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware: after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running, and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe some months back. This will keep system administrators busy, because there is only around half a year left, and updating to Windows Server 2008 or Windows Server 2012 may be difficult. Microsoft and support companies do not seem to be interested in continuing Windows Server 2003 support, so for those who need it, custom support pricing can be “incredibly expensive”. At this point it seems that many organizations want a new architecture, and one option they are considering is moving the servers to the cloud.

Windows 10 is coming to PCs and mobile devices. Just a few months back, Microsoft unveiled a new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show a Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. Windows Phone Store and Windows Store would be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for the Apple iPad and iPhone. It also has an email client compatible with both iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. Microsoft May Soon Replace Internet Explorer With a New Web Browser article says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drive with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Later (within the next two years) Intel promises 10TB+ SSDs thanks to 3D vertical NAND flash memory. SSD interfaces are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large numbers in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present, solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that anyone who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
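To see why hybrid setups are attractive, here is a back-of-the-envelope $/GB comparison. The prices below are rough illustrative assumptions for 2015-era hardware, not actual quotes:

```python
# Illustrative $/GB assumptions (not real price quotes):
HDD_PRICE_PER_GB = 0.04   # roughly a 4TB disk in the $160 range
SSD_PRICE_PER_GB = 0.50   # roughly a 512GB SSD in the $256 range

def storage_cost(hot_gb, cold_gb):
    """Cost of a hybrid layout: hot data on SSD, bulk data on disk."""
    return hot_gb * SSD_PRICE_PER_GB + cold_gb * HDD_PRICE_PER_GB

all_flash = storage_cost(10_000, 0)    # 10TB entirely on SSD
hybrid = storage_cost(500, 9_500)      # 0.5TB SSD tier + 9.5TB on disk
```

With these assumed prices the hybrid layout comes out far cheaper than all-flash for the same raw capacity, which is exactly the $/GB argument above.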

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about the device. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want a tablet already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There is no great reason for growth or decline to be seen in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be increasingly taken over by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick is likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and ARM processors (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects the stick-sized PC market to grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable REVOLUTION. Tablets and smartphones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or other tablet could be a usable development environment. At least it is worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower; it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems. And it is not just Linux anymore, as Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
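As a small illustration of the “easy to deploy and manage” point, here is a sketch that composes standard `docker run` command lines for two sandboxed applications sharing one host kernel (the image names and limits are just examples):

```python
# Sketch: building `docker run` invocations. The flags used (-d,
# --name, -p, --memory) are standard Docker CLI options.
def docker_run_cmd(image, name, ports=None, memory=None, detach=True):
    """Build the argument list for launching one sandboxed container."""
    cmd = ["docker", "run"]
    if detach:
        cmd.append("-d")          # run in the background
    cmd += ["--name", name]
    for host, cont in (ports or {}).items():
        cmd += ["-p", f"{host}:{cont}"]   # publish host:container port
    if memory:
        cmd += ["--memory", memory]       # cap the container's RAM
    cmd.append(image)
    return cmd

# Two apps on the same kernel, isolated from each other:
web = docker_run_cmd("nginx:latest", "web", ports={8080: 80})
db = docker_run_cmd("postgres:9.4", "db", memory="512m")
```

Each command would start one container; the processes share the kernel but get their own filesystem, network namespace and resource limits.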

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers. China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (a Linux-based desktop OS). Dell commercial PCs will preinstall NeoKylin in China. The plan for changes is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.

New software-defined networking apps will be delivered in 2015. And so will software-defined storage. And software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing, and it expects that over half the chips it sells to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like A.W.S. “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips. Customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, comes from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Specialized servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits can provide a lot more computing power at a much lower power consumption, but traditionally programming them has been time-consuming. This can change with the introduction of new tools (just the next step from techniques learned from GPU acceleration). Xilinx has developed its SDAccel tools to develop algorithms in C, C++ and OpenCL and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.


If there is one enduring trend for memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working on memristor memories that are promised to be akin to RAM but can hold data without power. The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 to 128 times denser, in fact. You could very well have 512 GB of memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting with emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    In search of an easier life: Do IT converged systems fit the bill?
    Automation for the Common (Sysadmin) People
    http://www.theregister.co.uk/2015/06/25/automation_for_the_people/

    Could converged systems change the way that IT admins spend their time? Figures suggest that mundane tasks such as backups and restores and system patches take between two and ten hours a week for around a third of those responsible for administering systems.

    Even more time is spent monitoring systems to ensure that they’re running smoothly. Automation would be a great way to solve some of these problems, but with administrators often having to juggle multiple, diverse systems from different vendors, that hasn’t always been an option.

    Automation is certainly possible without a converged system, but it is more difficult. Bolting together compute, storage, and network components from different vendors into a system of your own involves the use of different interfaces.

    Typically, a sysadmin will script tasks to carry out operations on each of these three components, but how many of them will have the time to write scripts that control compute, networking, and storage in unison, while monitoring and reacting to the feedback from all three? It’s doable, but for many, it will be a challenge.
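    The “three components in unison” problem can be sketched like this; the vendor client classes below are hypothetical stand-ins for three different real APIs:

```python
# Sketch: polling compute, network and storage through three different
# vendor interfaces. All client classes here are hypothetical.
class ComputeClient:
    def health(self): return "ok"

class NetworkClient:
    def health(self): return "ok"

class StorageClient:
    def health(self): return "degraded"

def check_all(clients):
    """Poll every layer and report the state of each one."""
    return {name: c.health() for name, c in clients.items()}

status = check_all({
    "compute": ComputeClient(),
    "network": NetworkClient(),
    "storage": StorageClient(),
})
alerts = [name for name, state in status.items() if state != "ok"]
```

    Even this toy version shows the pain: each client wraps a different vendor interface, and reacting to feedback from all three in unison is where the scripting effort really goes.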

    The benefits of a converged infrastructure lie in simplicity for IT admins. Instead of trying to juggle different vendors’ systems while carrying out complex IT tasks, they can rely on a single, integrated system that manages a lot of things behind the scenes.

    What to automate

    In either case, you’ll typically see a ‘single pane of glass’ management interface, designed to provide easy access to a variety of tasks that may have taken multiple manual steps in a different environment.

    The marriage of compute, network and storage allows for the automation of many tasks that have typically been manual processes for IT administrators. One example is the patching of systems. This is something that may have required manual intervention before, because systems had to be put into a patch-ready state.

    Software-defined approaches to the management of hyperconverged systems now make that easier to automate. Software can be scripted to cycle hosts into maintenance mode before applying the relevant system patches and spinning back up again.
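    A rough sketch of that cycle; the host class here is a hypothetical stand-in for a real hyperconverged management API:

```python
# Sketch of a scripted rolling patch cycle. The Host class is a
# hypothetical placeholder for a real management API.
class Host:
    def __init__(self, name):
        self.name, self.log = name, []
    def enter_maintenance_mode(self):   # drain workloads off this host
        self.log.append("maintenance")
    def apply_patch(self, patch):
        self.log.append(f"patch:{patch}")
    def exit_maintenance_mode(self):    # return it to the active pool
        self.log.append("online")

def rolling_patch(hosts, patches):
    """Patch one host at a time so capacity is never fully offline."""
    for host in hosts:
        host.enter_maintenance_mode()
        for p in patches:
            host.apply_patch(p)
        host.exit_maintenance_mode()

hosts = [Host("node-01"), Host("node-02")]
rolling_patch(hosts, ["KB-001"])
```

    The value of the converged approach is that “enter maintenance mode” and “apply patch” are single calls against one system instead of hand-stitched steps across several.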

    Service delivery

    What does all this mean for business managers? After all, whenever an IT department can find a way to make an IT benefit explicit to the business, it should seize the opportunity.

    The key phrase here is service delivery. IT organisations can exploit the automation capabilities of a converged system to offer more flexible, responsive services to business users.

    One example here is the provisioning of services, rather than simply raw VMs. Creating VMs is a manual task. Creating a VM with associated applications and appropriate storage and network capacity is a service.

    This too can be automatically fulfilled under converged infrastructure systems, because the tasks involved with provisioning – from bare metal bootstrapping through to allocation of logical resources – can be strung together and handled under one set of scripts.
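    The “service, not just a VM” idea could be sketched as one pipeline; every step name below is an illustrative placeholder for a real automation call:

```python
# Sketch: provisioning a service rather than a raw VM, with all the
# steps strung together into one repeatable, scriptable run.
def provision_service(name, app, storage_gb, vlan):
    """Return the ordered provisioning plan for one service."""
    return [
        f"bootstrap bare metal for {name}",
        f"allocate {storage_gb}GB of logical storage",
        f"attach network interface on VLAN {vlan}",
        f"create VM {name}",
        f"deploy application {app}",
    ]

plan = provision_service("crm-01", "crm-suite", 200, 42)
```

    In a real converged system each string would be an API call, but the shape is the point: one script owns the whole chain from bare metal to running application.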

    Distributed systems management

    Many of these automated tasks and processes are reactive – they help to streamline the tasks associated with incoming requests and problems. That’s great for IT departments trying to free themselves from a constant cycle of firefighting. But is that all it can do?

    Some automation tasks within a converged or hyperconverged infrastructure are preventative. Consider system monitoring, which can detect emerging problems in the compute/network/storage ecosystem and then be configured to automatically create service tickets for them.

    This automation could also be extended into other areas, such as capacity planning, so that sysadmins could be alerted to predicted capacity changes, for example.
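    A naive version of such capacity prediction could look like this; the sample data and linear-trend assumption are purely illustrative:

```python
# Sketch: linear-trend capacity forecasting from evenly spaced daily
# usage samples. Real tools use far more sophisticated models.
def days_until_full(samples_gb, capacity_gb):
    """Estimate days until capacity is exhausted at the current rate."""
    growth_per_day = (samples_gb[-1] - samples_gb[0]) / (len(samples_gb) - 1)
    if growth_per_day <= 0:
        return None                    # not growing; nothing to alert on
    return (capacity_gb - samples_gb[-1]) / growth_per_day

usage = [700, 710, 725, 730, 740]      # GB used over five days
remaining = days_until_full(usage, 1000)   # 26.0 days at this rate
```

    An automated job could run this against every datastore and raise a ticket whenever the estimate drops below some threshold.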

  2. Tomi Engdahl says:

    BMW: ‘Our competitor is not Audi, Jaguar Land Rover or Mercedes but consumer electronics players’
    http://www.computerworlduk.com/news/data/bmw-our-competitor-is-not-audi-jaguar-land-rover-or-mercedes-but-consumer-electronics-players-3616944/

    BMW is internalising formerly outsourced IT operations to get better control of its digital footprint

    BMW is bringing software back in-house so it can deliver seamless digital experiences for its customers – something more valued than horsepower or engines in today’s market, its digital business models lead said.

    It has turned its focus to clawing back crucial customer data that has previously been left behind with dealerships, partners and dashboard infotainment system manufacturers in a bid to learn more about its drivers and prevent churn.

    “Our competitor is not Audi, Jaguar Land Rover or Mercedes but the space of consumer electronics players. This is a big big shift for car industries: to have that attention to detail in digital car experiences in a seamless way”, said Dieter May, BMW’s digital business models senior vice president.

    This means that BMW needs to apply a certain criteria to each product it releases like integration, security, predictive capabilities, personalisation, whether it is smart, and lastly, “delightful”, he added.

    Shunning an outsourcing model in favour of in-house products built on cloud platforms like Microsoft Azure and Amazon is key to turning around software builds from “year cycles to daily cycles”, so an internal development team has better control over change requests and other processes when iterating.

    Further, providing brand loyalty in the same way Apple does through digital is key to preventing churn, May said. But this means, like many firms across all industries, BMW will “need to massively change our culture to a customer centric model.”

    He added: “Only 4 percent of installed car base in the world is used at any time – this is probably the biggest waste of capital in the world.”

    Yet the car industry is in a privileged position due to the role it plays in the Internet of Things; BMW models have “100 different sensors” that have yet to be exploited, May said.

  3. Tomi Engdahl says:

    Bank of England CIO: ‘Beware of the cloud, beware of vendors’
    Old Lady grumbles about new thingy
    http://www.theregister.co.uk/2015/06/25/bank_of_england_no_public_cloud/

    The Bank of England is loosening up on IT delivery and recruitment, but not its resistance to public cloud.

    John Finch, CIO of the UK’s central bank since September 2013, Wednesday ruled out the use of any public cloud by the bank for the foreseeable future.

    Cloud has however crept into the Bank’s IT margins, where it’s been working with firms on the new plastic bank notes that debuted in March from Clydesdale Bank.

    “One area where it’s changed is we have to share details on the design of the new bank note with people who make the machines that process them — we have built a hybrid private cloud for them to connect to, so at the margins of what we do,” he conceded.

    However, speaking at the Cloud World Forum in London, Finch ruled out any role for cloud in the Bank’s core IT systems and infrastructure, reiterating an announcement first made in 2014.

    But Finch reckons that if your reason for going to the cloud is to save money, you shouldn’t go to the cloud. “Beware of the cloud and beware of the vendors,” Finch warned. “All those messages I gave a year ago, I passionately believe.”

    “Make sure you understand where your data resides, make sure you understand the details of your contract, make sure you understand the security, and make sure you stay in control,” he said.

    The bank’s IT hiring policy is also striving for greater diversity – by age, sex and ethnicity – incorporating new graduate recruitment and school-leaver apprenticeship programs. In the past, he joked, to get a Bank of England job you’d need to have a first from Oxford or Cambridge, or to have been very bright at Imperial College London, and male.

    “Particularly in technology we want to recruit people who we wouldn’t normally recruit – specky, geeky kids hacking in their bedroom,” he said. The philosophy is fresh thinking and ideas will flow from diversity and cause disruptive change for the Bank.

  4. Tomi Engdahl says:

    Whoops, there goes my data! Hold onto your privates in the Dropbox era
    Shake off your sluggishness and learn to live with shadow IT
    http://www.theregister.co.uk/2015/06/24/preserve_enterprise_security_age_of_dropbox/

    Your users are probably using cloud-based services that you’re not even aware of to organise their files and collaborate with each other. What are you going to do about it?

    “Shadow” IT — cloud services bought from third-party providers without authorisation by the IT department — is becoming a significant problem for many companies, even if they don’t know it yet.

    Canopy, the Atos cloud brand, recently conducted a survey of 350 IT decision makers across the UK, Germany, France, the Netherlands and the US. Half of the line of business managers reckoned between five and 15 per cent of their departmental budget was spent on shadow IT, amounting to €8.6m.

    And 60 per cent of the CIOs surveyed said that shadow IT drained around $13m on average from their organisation last year.

    Companies often only refresh their IT in a major way every decade or so, according to Thales Security cybersecurity practice lead Sam Kirby-French.

    “Part of it is that the IT department isn’t supporting the user well enough, and the user wants to make their own life as easy as possible, so they will use alternatives,” he said. “And it’s difficult to stop them using those alternatives.”

    The Canopy survey said more than two-thirds of respondents viewed their IT department’s sluggishness as a key factor that would push departments further into the arms of third-party service providers.

    This unresponsiveness manifested itself as a failure to sanction short-term pilots quickly enough, and to host products for launches in a timely enough way.

    What kind of policy can the IT department put in place to stop naughty users from exposing corporate data in the cloud? The most draconian one is the grumpy cat approach: simply blacklist everything.

    In the first quarter of 2015, the average firm used 923 distinct cloud services, Skyhigh Networks estimates.

    “We are adding 100 new cloud services to the registry every week,” Skyhigh’s Hawthorn explained. “Old-style web filters find it difficult to work out where to put them.”

    Typically, URL blockers will have a few tens of categories for different sites, ranging from porn to social networks, entertainment and sports. “Where do you put a cloud service that could be used for many different things?” Hawthorn asks.

    In any case, if you just try to block everything, you often achieve the opposite effect, pushing your users away from well-established and reputable sites into specious online apps run out of someone’s shed. Far better Dropbox, say, than Yuri’s MegaBling Filesharing Service.

    Alternatively, they will simply find other ways of accessing the mainstream cloud services that they were using before. Once, people would bring modems into their office to get dial-up access to the internet at work. Today, 4G “Mi-Fi” hotspots and rogue Wi-Fi access points are an alternative.

    “It’s a device that the laptop thinks is a hotspot, and it connects to data services. So people can get around the URL filters that block them from doing certain things. Now all of a sudden you have a rogue Wi-Fi access point that doesn’t even exit through the firewall,” said John Pescatore, director of emerging security trends at the SANS Institute.

    CMS systems cumbersome

    So, simply blocking cloud services is problematic. What other options exist? Perhaps content management systems, which manage document workflow throughout an organisation, could help?

    These solutions grant access to documents based on permissions set by administrators, who can set security profiles to enforce access controls. It’s a nice idea, says Kirby-French, but don’t hold your breath.

    “You’re quickly trying to work on a document and you click upload, and it takes 30 seconds for a 10MB PowerPoint presentation. And every time you want to change it you have to check it in and out, which takes time,” he said.

    Again, you’re trying to overcome people’s inherent desire to make life as easy as possible, so giving them extra hoops to jump through may not be appropriate.

    A nuanced approach

    A productive approach to managing security in a cloud-based world will be nuanced, involving some give and take between IT departments and users alike. It starts with a basic audit, in which IT departments work out what cloud services are already being used without authorisation.

    You can try and strike an amnesty with departmental managers to find out from them what they are accessing, or you can do your best to mine network logs.
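    Mining the logs can be sketched in a few lines, assuming a simplified proxy log in which the requested URL is the third whitespace-separated field. The log format, hostnames and sanctioned-service list here are all hypothetical:

    ```python
    from collections import Counter
    from urllib.parse import urlparse

    # Hypothetical whitelist of sanctioned services.
    SANCTIONED = {"mail.example.com", "crm.example.com"}

    def shadow_it_report(log_lines):
        """Tally requests to cloud hosts that are not on the sanctioned list."""
        hits = Counter()
        for line in log_lines:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip malformed lines
            host = urlparse(fields[2]).hostname
            if host and host not in SANCTIONED:
                hits[host] += 1
        return hits.most_common()

    log = [
        "1435000000.123 192.0.2.10 https://www.dropbox.com/upload",
        "1435000001.456 192.0.2.11 https://mail.example.com/inbox",
        "1435000002.789 192.0.2.10 https://www.dropbox.com/upload",
    ]
    print(shadow_it_report(log))  # [('www.dropbox.com', 2)]
    ```

    Even a crude tally like this shows which unsanctioned services to assess first.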

    There are various firms that will discover your network’s exposure to existing cloud services for you. Aside from Skyhigh Networks, there are also Netskope and CipherCloud.

    Can any of these unauthorised cloud services stay? One of the key tasks when assessing existing cloud services is to understand their security levels. Start with those that are most often used by employees.

    There are some guidelines you can follow when evaluating these services. The Cloud Security Alliance publishes its Cloud Controls Matrix, which lays out security concepts in 13 different domains, although this may be daunting for smaller businesses.

    “Every organisation makes slightly different rules,” said Hawthorn. “They have a group of people from different departments that get together to define what the minimum standards are for cloud services.”

    Whichever approach the IT department uses to shore up security in a cloud-based world, it will be important to accompany it with a robust and clearly communicated policy.

    In a Trustmarque survey of 2,016 British office workers, 84 per cent either said that their organisation didn’t have a cloud usage policy, or that they didn’t know whether it had one.

    Reply
  5. Tomi Engdahl says:

    Gaming the system: exploring the benefits of intranet gamification
    By Melanie Baker – March 10, 2014
    https://www.igloosoftware.com/blogs/inside-igloo/gaming_the_system_exploring_the_benefits_of_intranet_gamification?utm_source=techmeme&utm_medium=referral&utm_campaign=blog

    Gamification isn’t just playing games, and is increasingly becoming a useful corporate tool to increase employee productivity and intranet engagement.

    Gamification is showing up in an increasing number of areas in business, from employee training, to social elements on the corporate website, to the intranet. According to Gartner, by 2015 up to 40% of Global 1000 organizations will be using gamification in business operations. Gamification doesn’t necessarily refer to actually playing games, however. There are many different gamification elements, but not all are relevant to the intranet, so we’ll focus on the ones that are.

    Points, badges, and voting/ranking are three of the most common gamification elements that can be implemented on the intranet.

    Make sure that the requirements to achieve rewards are clear and easy to find, especially for the introductory ones like badges for new member activities.

    An important consideration in designing your intranet’s gamification is the reward parameters and thresholds:

    For what actions will members be rewarded?

    How many points are different activities worth?

    What will the interval be between rewards? (E.g. how many points must be accrued before you earn the next badge.)

    How many levels of rewards will there be?

    Do the rewards have any peripheral effects on member prominence or feature access?

    Are there real world bonuses for accruing points? (E.g. gift cards.)
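    As a toy illustration, the reward parameters and thresholds above boil down to a points-to-badge mapping. All point values, activity names and badge names here are invented for the sketch:

    ```python
    # Hypothetical point values and badge thresholds, for illustration only.
    POINT_VALUES = {"blog_post": 50, "file_upload": 10, "comment": 5}
    BADGE_THRESHOLDS = [(0, "Newcomer"), (100, "Contributor"), (500, "Expert")]

    def score(activities):
        """Total points earned for a list of recorded member activities."""
        return sum(POINT_VALUES.get(a, 0) for a in activities)

    def badge(points):
        """Highest badge whose point threshold the member has reached."""
        earned = BADGE_THRESHOLDS[0][1]
        for threshold, name in BADGE_THRESHOLDS:
            if points >= threshold:
                earned = name
        return earned

    pts = score(["blog_post", "comment", "comment", "file_upload"])
    print(pts, badge(pts))  # 70 Newcomer
    ```

    Writing the parameters down this explicitly makes it easy to review the intervals between rewards before anything goes live.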

    According to Gartner, up to 80% of intranet gamification efforts fail due to poor design, so it’s important to work through these considerations before implementing anything.

    Marketing intranet gamification

    Which brings up the question of what to do about those who are less motivated by public recognition, or who simply don’t want to participate in gamification. It’s important to remember that the participation is what matters most, not the rewards. The main goal is to train and encourage people to be active on the intranet and to contribute regularly. If they get rewards for that, great. But be careful not to create a situation in which members respond only to extrinsic rewards, leaving you at a loss for how to encourage them otherwise.

    What can intranet gamification do for your company?

    Gamification can help enable staff to learn how to complete tasks, feel good about their achievements, and publicly recognize their successes. For example, to an employee who isn’t normally a writer, completing a first blog post could be a pretty daunting exercise. It’s worth recognizing when the post gets published correctly, the information shared is useful, and other employees engage in discussing it. After all, the employee has created user-generated content, improved the company’s knowledge base, and catalyzed internal communication.

    While you want people to mostly appreciate the feeling of success from doing good work, external recognition is valuable, too. According to Gabe Zicherman, gamification can increase employee productivity by 40%.

    Reply
  6. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Google has quietly launched a GitHub competitor, Cloud Source Repositories — Google hasn’t announced it yet, but the company earlier this year started offering free beta access to Cloud Source Repositories, a new service for storing and editing code on the ever-expanding Google Cloud Platform.

    Google has quietly launched a GitHub competitor, Cloud Source Repositories
    http://venturebeat.com/2015/06/24/google-has-quietly-launched-a-github-competitor-source-code-repositories/

    It won’t be easy for Google to quickly steal business from source code repository hosting companies like GitHub and Atlassian (with Bitbucket). And sure enough, Google is taking a gradual approach with the new service: It can serve as a “remote” for Git repositories sitting elsewhere on the Internet or locally.

    Reply
  7. Tomi Engdahl says:

    AngularJS VS ExtJS – through my experience
    http://www.sagivfrankel.com/2013/10/26/angularjs-vs-extjs-through-my-experience/

    Two frameworks with a very different approach, but both with the same driving force: make the web work for web apps. The difference in approach is that one chooses to embrace the web and its standards and significantly improve on them, while the other layers on top of it and gives you an awesome layer of abstraction.

    When I started working with Ext JS five years ago, web development was still pretty hard. Ajax and jQuery, along with strong JavaScript advocates like Douglas Crockford, had started making JavaScript popular. But two large problems still remained:

    1) making the UI look and behave more like a desktop application.

    2) creating a solid application structure (MV*) was still a big challenge.

    That’s when I met ExtJS, and I was amazed!

    Both problems were almost solved.

    The ability to program without worrying about cross-browser compatibility was a big plus, and the handy components delivered with Ext JS were a huge time saver, with the star of the show being its awesome grid system, bound to and re-rendered by a data store, a single source of truth for your data.

    Ext JS 4 was a big step forward, introducing an MVC paradigm and Sass to let developers customize their CSS relatively easily.

    After changing my workplace I was introduced to AngularJS.
    AngularJS is basically a polyfill for the browser that gives you the basic and critical things you need for a good and solid web app:
    - MV* architecture
    - Two-way binding
    - Routing
    - Web components (directives)
    - Dependency injection
    - Testable code

    A detailed comparison

    Reply
  8. Tomi Engdahl says:

    Secure Server Deployments in Hostile Territory
    http://www.linuxjournal.com/content/secure-server-deployments-hostile-territory

    Would you change what you said on the phone, if you knew someone malicious was listening?

    Although I always have tried to build secure environments, EC2 presents a number of additional challenges both to your fault-tolerance systems and your overall security. Deploying a server on EC2 is like dropping it out of a helicopter behind enemy lines without so much as an IP address.

    In this article, I discuss some of the techniques I use to secure servers when they are in hostile territory. Although some of these techniques are specific to EC2, most are adaptable to just about any environment.

    Reply
  9. Tomi Engdahl says:

    1 billion Windows 10 PCs by 2017? Yes, really.
    http://money.cnn.com/2015/06/24/technology/windows-microsoft-sales/index.html

    Microsoft has an ambitious goal for Windows 10: The company believes that it can install Windows 10 in 1 billion devices by 2017.

    It sounds insane. But a study of IT professionals released Wednesday suggests that Microsoft should be able to hit that target with ease.

    Spiceworks, an online network of millions of IT professionals, found that nearly three-quarters of businesses plan on installing Windows 10 within two years of the software’s July 29, 2015, release.

    Already 60% of IT departments have tested the new Microsoft operating system, and 40% plan to start rolling out Windows 10 this year.

    That’s in stark contrast to the largely failed Windows 8 release from 2012. Just 18% of companies have deployed Windows 8 or Windows 8.1 on their PCs — the tile-based operating system confused users and never worked quite right for people running Windows on a desktop with keyboards and mice.

    If Spiceworks’ survey is correct, and 73% of businesses adopt Windows 10 by 2017, that would make Windows 10 the most quickly deployed version of Windows in history. Despite its current popularity, just 60% of businesses installed Windows 7 within 24 months of its launch, according to Spiceworks.

    “Microsoft’s stated goal of 1 billion Windows 10 devices in two to three years is achievable, and strong interest from IT buyers bodes well for the entire Windows 10 ecosystem,” said Sanjay Castelino, Spiceworks’ marketing chief.

    Reply
  10. Tomi Engdahl says:

    A new wave of US internet companies is succeeding in China—by giving the government what it wants
    http://qz.com/435764/a-new-wave-of-us-internet-companies-is-succeeding-in-china-by-giving-the-government-what-it-wants/

    Facebook found itself shut out from China in 2009. Twitter got blocked the same year. In 2010, Google pulled its search services from China after a government hack. Beijing, it seems, was sending a message to high-profile American internet companies: play by our rules and censor content, or don’t play at all.

    After Google’s exit, those three firms have yet to come back. But in recent years, other American internet companies have found a degree of success in China—or at least a bit more stability than their predecessors.

    The solution involves sacrifice—hand over data and control, and the Chinese government will hand you the keys to the market.

    Reply
  11. Tomi Engdahl says:

    Does the operating system matter?

    Operating systems have divided computer enthusiasts into firmly entrenched camps since the dawn of time-sharing.

    Recently the operating system wars have calmed down. Or at least they should have: the importance of the operating system is in fact fading. Today’s home user can be endearingly indifferent to whether the machine runs Windows, OS X or Linux. Most basic tasks have shifted to the network, and when services are used through a web browser, the underlying operating system is secondary, as long as it offers a handful of basic functions.

    For consumers this is a happy development. An operating system used to be a costly purchase, but today things are different. Apple announced long ago that OS X upgrades would be free, and Microsoft in Redmond is now moving in the same direction: Windows 10 will be offered as a free upgrade to a large proportion of users.

    For Linux users the pricing change makes no difference, though the growth of free competitors does, on the other hand, weaken one competitive advantage of open source.

    Source: http://www.tivi.fi/blogit/2015-06-26/Onko-k%C3%A4ytt%C3%B6j%C3%A4rjestelm%C3%A4ll%C3%A4-merkityst%C3%A4-3324741.html

    Reply
  12. Tomi Engdahl says:

    Github’s ‘Atom’ text editor hits version 1.0
    Emacs’ self-appointed heir heads out into the world
    http://www.theregister.co.uk/2015/06/29/githubs_atom_text_editor_hits_version_10/

    Github’s Atom text editor, which it announced back in February 2014, has reached version 1.0.

    Atom’s inspiration was the venerable text editor Emacs, but its backers hoped a fresh start would result in a tool suited to modern web programming. The result is a tool designed from the ground up for coding and customisable in all manner of ways. While the tool is built on a custom version of Chromium (you’re always looking at what is actually a web page), Atom behaves like a text editor and is customisable using JavaScript. It can also be used to produce JavaScript, of course, with autocomplete and multi-pane views advanced as features that enhance developer productivity.

    Atom hit beta in May 2014 and its curators say it’s endured 155 releases since and is now available for Windows, Linux and MacOS. It’s already something of a hit: the team behind the tool counts 1.3 million downloads and reckons 350,000 folk use it each month.

    The application is yours for the downloading at Atom.io.

    Reply
  13. Tomi Engdahl says:

    HP one of the fairest, claims Gartner’s magic quadrant on the wall
    Dell, HDS non-inclusion could be seen as a distortion
    http://www.theregister.co.uk/2015/06/29/gartner_afa_mq_dell_cisco/

    Gartner has announced that HP has joined the leaders in its all-flash array magic quadrant, while Violin Memory, Nimbus Data and Cisco get demoted, and Dell and HDS are excluded altogether from the MQ because of the research giant’s peculiar classification criteria.

    The MQ classifies suppliers in what Gartner calls the Solid State Array (SSA) market, but which everyone else calls all-flash arrays (AFA).

    Only arrays which have a unique model and ordering number — and which cannot have disk drives added — are included in the Gartner mix, which means Dell and HDS AFAs are excluded.

    Reply
  14. Tomi Engdahl says:

    Tech Mahindra posts profit warning: The end for Indian outsourcing?
    ‘Seasonally weak’ mobility business to blame
    http://www.theregister.co.uk/2015/06/29/tech_mahindra_posts_profit_warning/

    Indian outsourcer Tech Mahindra, which bought disgraced rival Satyam in 2012, has issued a profit warning — the first sign of trouble in the buoyant market for some time.

    In a regulatory filing to the Bombay Stock Exchange, it blamed a “seasonally weak” mobility business for dragging down first-quarter revenue

    Anthony Miller, analyst at TechMarketView, said the warning was noteworthy given the pace of growth in most Indian outsourcing outfits.

    “I can’t remember seeing a profit warning from an Indian pure-play,” he said.

    Reply
  15. Tomi Engdahl says:

    Klint Finley / Wired:
    GitHub has become the go-to centralized repository for software, but its freemium model, like SourceForge’s, is bound to come under pressure

    The Problem With Putting All the World’s Code in GitHub
    http://www.wired.com/2015/06/problem-putting-worlds-code-github/

    The ancient Library of Alexandria may have been the largest collection of human knowledge in its time, and scholars still mourn its destruction. The risk of so devastating a loss diminished somewhat with the advent of the printing press and further still with the rise of the Internet. Yet centralized repositories of specialized information remain, as does the threat of a catastrophic loss.

    Take GitHub, for example.

    GitHub has in recent years become the world’s biggest collection of open source software. That’s made it an invaluable education and business resource. Beyond providing installers for countless applications, GitHub hosts the source code for millions of projects, meaning anyone can read the code used to create those applications. And because GitHub also archives past versions of source code, it’s possible to follow the development of a particular piece of software and see how it all came together. That’s made it an irreplaceable teaching tool.

    GitHub’s pending emergence as Silicon Valley’s latest unicorn holds a certain irony. The ideals of open source software center on freedom, sharing, and collective benefit—the polar opposite of venture capitalists seeking a multibillion-dollar exit. Whatever its stated principles, GitHub is under immense pressure to be more than just a sustainable business.

    When profit motives and community ideals clash, especially in the software world, the end result isn’t always pretty.

    Sourceforge: A Cautionary Tale

    Sourceforge is another popular hub for open source software that predates GitHub by nearly a decade. It was once the place to find open source code before GitHub grew so popular.

    There are many reasons for GitHub’s ascendance, but Sourceforge hasn’t helped its own cause. In the years since career services outfit DHI Holdings acquired it in 2012, users have lamented the spread of third-party ads that masquerade as download buttons, tricking users into downloading malicious software. Sourceforge has tools that enable users to report misleading ads, but the problem has persisted.

    It’s hard to say how many projects have truly fled Sourceforge because of the site’s tendency to “mirror” certain projects.

    But the damage to Sourceforge’s reputation may already have been done.

    No Ads (For Now)

    GitHub has a natural defense against ending up like this: it’s never been an ad-supported business. If you post your code publicly on GitHub, the service is free. This incentivizes code-sharing and collaboration. You pay only to keep your code private. GitHub also makes money offering tech companies private versions of GitHub, which has worked out well: Facebook, Google and Microsoft all do this.

    Still, it’s hard to tell how much money the company makes from this model. (It’s certainly not saying.) Yes, it has some of the world’s largest software companies as customers. But it also hosts millions of open source projects free of charge, without ads to offset the costs of storage, bandwidth, and the services layered on top of all those repos.

    Reply
  16. Tomi Engdahl says:

    Danny Sullivan / Marketing Land:
    Microsoft exec says Bing is a multibillion dollar business that pays for itself

    Search Beats Display: Microsoft Says Bing Is Sustainable & Standalone Multibillion Dollar Business
    http://marketingland.com/microsoft-bing-sustainable-standalone-multibillion-133779

    A rare reveal as Microsoft exits the display ads business highlights what a powerhouse Bing search apparently is.

    Reply
  17. Tomi Engdahl says:

    SCOTUS Denies Google’s Request To Appeal Oracle API Case
    http://tech.slashdot.org/story/15/06/29/1756209/scotus-denies-googles-request-to-appeal-oracle-api-case?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    The Supreme Court of the United States has today denied Google’s request to appeal against the Court of Appeals for the Federal Circuit’s ruling (PDF) that the structure, sequence and organization of 37 of Oracle’s APIs (application program interfaces) was capable of copyright protection. The case is not over, as Google can now seek to argue that, despite the APIs being protected by copyright, its use of them amounts to “fair use”.

    Reply
  18. Tomi Engdahl says:

    MIT System Fixes Software Bugs Without Access To Source Code
    http://it.slashdot.org/story/15/06/29/1533204/mit-system-fixes-software-bugs-without-access-to-source-code

    MIT researchers have presented a new system at the Association for Computing Machinery’s Programming Language Design and Implementation conference that repairs software bugs by automatically importing functionality from other, more secure applications.

    Automatic bug repair
    http://newsoffice.mit.edu/2015/automatic-code-bug-repair-0629

    System fixes bugs by importing functionality from other programs — without access to source code.

    At the Association for Computing Machinery’s Programming Language Design and Implementation conference this month, MIT researchers presented a new system that repairs dangerous software bugs by automatically importing functionality from other, more secure applications.

    Remarkably, the system, dubbed CodePhage, doesn’t require access to the source code of the applications whose functionality it’s borrowing. Instead, it analyzes the applications’ execution and characterizes the types of security checks they perform. As a consequence, it can import checks from applications written in programming languages other than the one in which the program it’s repairing was written.

    Once it’s imported code into a vulnerable application, CodePhage can provide a further layer of analysis that guarantees that the bug has been repaired.

    “We have tons of source code available in open-source repositories, millions of projects, and a lot of these projects implement similar specifications,” says Stelios Sidiroglou-Douskos, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who led the development of CodePhage. “Even though that might not be the core functionality of the program, they frequently have subcomponents that share functionality across a large number of projects.”

    With CodePhage, he says, “over time, what you’d be doing is building this hybrid system that takes the best components from all these implementations.”

    To begin its analysis, CodePhage requires two sample inputs: one that causes the recipient to crash and one that doesn’t.
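    As a toy illustration of the idea (not the real CodePhage, which works on binaries and transplants checks across languages without source access), the two sample inputs let you spot a donor whose extra check survives the input that crashes the recipient, and then transplant that check:

    ```python
    def recipient(values):
        # The vulnerable program: no guard against an empty list.
        return sum(values) / len(values)

    def donor(values):
        # A "more secure" donor application that performs the missing check.
        if not values:
            raise ValueError("empty input rejected")
        return sum(values) / len(values)

    def crashes(fn, arg):
        """True if fn fails uncontrolled; a deliberate rejection is not a crash."""
        try:
            fn(arg)
        except ValueError:
            return False
        except Exception:
            return True
        return False

    # The two required sample inputs: one crashing, one benign.
    crashing, benign = [], [2.0, 4.0]

    # The donor survives the input that crashes the recipient, so its
    # check is a candidate for "transplanting" in front of the old code.
    def patched(values):
        if not values:                      # the transplanted security check
            raise ValueError("empty input rejected")
        return recipient(values)

    print(crashes(recipient, crashing), crashes(patched, crashing))  # True False
    ```

    The real system does this by characterising the security checks in the donor’s execution rather than by copying source lines, but the before/after contract is the same: the patched program rejects the crashing input cleanly and still handles the benign one.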

    Automated future

    The researchers tested CodePhage on seven common open-source programs in which DIODE had found bugs, importing repairs from between two and four donors for each. In all instances, CodePhage was able to patch up the vulnerable code, and it generally took between two and 10 minutes per repair.

    As the researchers explain, in modern commercial software, security checks can take up 80 percent of the code — or even more. One of their hopes is that future versions of CodePhage could drastically reduce the time that software developers spend on grunt work, by automating those checks’ insertion.

    “The longer-term vision is that you never have to write a piece of code that somebody else has written before,” Rinard says. “The system finds that piece of code and automatically puts it together with whatever pieces of code you need to make your program work.”

    Reply
  19. Tomi Engdahl says:

    The programmer’s guide to breaking into management
    http://www.infoworld.com/article/2938909/it-careers/the-programmers-guide-to-breaking-into-management.html

    The transition from command line to line-of-command requires a new mind-set — and a thick skin

    Software development, like any career, is divided into leaders and producers. You’re either Steve Jobs, or you’re Woz. Two completely different approaches, and yet both can lead to great success.

    Talented engineers may see managing a team as the next step to growing their careers. So if you’re moving in this direction, what tools do you need to make the transition? We’ll look at some possible approaches, common pitfalls — and offer solutions.

    A first question might be whether to make a change at all. What if a Woz-like existence is more your style? Knowing yourself and whether management is really where you want to land is worth some self-reflection.

    “You have to think about what aspects of the job you really enjoy, and which you try to avoid,” says Adam Wolf, head of engineering for foundational applications at Bloomberg L.P. “If what you really enjoy doing is bringing everyone together to accomplish something as a team, or building a vision and getting everyone behind it, then management is a great opportunity to have a broader impact.”

    The management track begins right where you are, in your current position. It requires taking on more responsibility, reaching out to team members, and making yourself visible.

    “Ask yourself how well you tolerate risk and criticism,” says Hutley, a former CIO at British Telecom and vice president of innovation at Cisco Systems. “Be honest. Better to be a happy grassroots worker than a miserable leader. That said, stretch yourself. Have the courage to move outside of your comfort zone and take on more responsibility.”

    Managing others will often lead to awkward situations, and an exceptional career can be an uncomfortable one. Good managers are driven by a desire to lead, and understand that delivering criticism may influence people but not necessarily win friends.

    “Leadership means making hard decisions on occasions — disagreeing with those who used to be your colleagues — and it can be a lonely place,” Hutley says. “The higher up you go the more certain it is you will fail — in someone’s eyes.”

    But if you’ve never managed people before, how can you know if leading others is a good fit for you? Hutley offers these tips: “Are you one of those who tends to think beyond the immediate task, not just at work but socially as well? Do you suggest a better way of doing things or challenge things when they don’t seem right? Do others seek you out for your thoughts or guidance? If this is you then you are a natural leader — and others recognize it too.”

    If you’re hopeful that hard work and attention to detail will speed your way to the top, you may need to broaden your plan. The leap to management will mean a complete redesign of your work life.

    You could, of course, apply to an MBA program and complete it online or after work. Public speaking courses can help, say our experts, along with budget training, self-assessments like Myers-Briggs, and training in diversity and inclusion. But there are plenty of opportunities at the office that can help you move in the right direction.

    “Find mentors,” agrees Hutley.

    One of our pros says management offers many of the same challenges and uncertainties as parenting.

    First, get ready for “a complete and total career change,” says executive coach Long. “There are no product specs or algorithms for people. As a manager, your job will be 90 percent about influencing people, which is an inherently illogical task, and dealing with ambiguity in the business while still producing results through others, which is also a task that can’t be done by leaning on logic and reason alone.”

    And now for the really tough part. Are you ready to hand over control and let your team do their jobs?

    Reply
  20. Tomi Engdahl says:

    Startup’s Tech is Intel’s Quark Neural Network
    http://www.eetimes.com/document.asp?doc_id=1326977&

    The pattern-classification technology inside the Quark SE system chip from Intel is the same as that being offered for license and in chip form by startup NeuroMem Inc. (Petaluma, Calif.).

    The Quark SE is the system-chip on Intel’s button-sized Curie module that was launched at the Consumer Electronics Show. Intel said at the time that the Quark SE chip, a processor developed for wearable applications, included a pattern classification engine that allows it to identify different motions and activities.

    “Yes, the pattern matching/classification engine inside the Quark SE is an implementation of our technology,” Lambinet said.

    “We can license our IP to semiconductor companies for integration in their SoC or we can license to OEM customers doing their own SoCs or FPGAs. NeuroMem also sells standard ICs, boards and development tools that our customers use to build their systems,” Lambinet said.

    He added that up until now sensor-based peripherals had to be connected to a smartphone or to the cloud to perform any useful classification. With the NeuroMem technology deployed at the end-point node the sensing unit can become autonomous and does not need to consume bandwidth and power to transmit unfiltered data.

    The NeuroMem technology can also be deployed elsewhere in the network because it scales well, Lambinet said. “We recognise one face out of millions just as fast as we recognise one face out of 1000. CPU/GPU solutions are extremely fast for small and medium datasets but they slow down dramatically as soon as they reach the limit of their parallelism. At some point, they become sequential as all the cores have to share memory bandwidth. Our technology does not have this limitation because the computing happens inside the memory.”

    The CM1K is a chain of 1,024 identical neurons that operate in parallel but are connected together to make global decisions.
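    A rough software model of this kind of pattern-matching memory (purely illustrative: the class and method names below are invented, not NeuroMem’s actual API, and real CM1K neurons also carry influence fields and can answer “unknown”) shows how recognition reduces to a nearest-prototype search:

```python
# Illustrative model of a pattern-matching neuron array (invented names,
# not NeuroMem's API). Each "neuron" stores a prototype pattern plus a
# category; classification returns the category of the closest prototype.
# In hardware all neurons compare in parallel, so this sequential loop
# only models the decision logic, not the parallelism.

def manhattan(a, b):
    """L1 distance between two equal-length pattern vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

class NeuronArray:
    def __init__(self):
        self.neurons = []  # list of (prototype, category) pairs

    def learn(self, prototype, category):
        self.neurons.append((prototype, category))

    def classify(self, pattern):
        # Winner-take-all: the neuron with the smallest distance decides.
        _, category = min(
            self.neurons, key=lambda n: manhattan(n[0], pattern)
        )
        return category

array = NeuronArray()
array.learn([0, 0, 0, 0], "rest")
array.learn([9, 9, 9, 9], "shake")
print(array.classify([1, 0, 1, 0]))  # prints: rest
```

    Because every hardware neuron computes its distance simultaneously and a winner-take-all circuit resolves the minimum, recognition time stays flat as more patterns are stored, which is the scaling property Lambinet describes.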

  21. Tomi Engdahl says:

    Cloud-Based Backup and Disaster Recovery Is a Win-Win for Business
    http://www.cio.com/article/2942073/disaster-recovery/cloud-based-backup-and-disaster-recovery-is-a-win-win-for-business.html

    New models present a compelling alternative for business continuity

    The cloud is pretty much a win-win when it comes to business continuity. First, a cloud service structurally is a mesh of redundant resources scattered across the globe. If one resource should become unavailable, requests re-route to another available site. So from a high-availability standpoint, everyone benefits.

    That’s why classes of “as a service” models are emerging for backup and recovery. Backup as a service (BaaS) and disaster recovery as a service (DRaaS) resonate particularly well with smaller, growing businesses that may not have the budgets for the equipment and real estate required to provide hot, warm, or even cold backup facilities and disaster recovery sites. The cloud itself becomes “the other site” – and you only pay for the “facilities” when you use them because of the cloud’s inherent usage-based pricing model.

    The global DRaaS market is forecast to grow by 36 percent annually from 2014 to 2022, according to Transparency Market Research. Cloud-based backup and DR makes it easy to retrieve files and application data if your data center or individual servers become unavailable. Using the cloud alleviates the threat of damage to or theft of a physical storage medium, and there’s no need to store disks and tape drives in a separate site.
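    As a quick sanity check on what a 36 percent compound annual growth rate implies (illustrative arithmetic only, not figures from the report):

```python
# Compound annual growth: a 36 percent CAGR from 2014 to 2022 implies the
# market multiplies by 1.36 each year for eight years.
growth_rate = 1.36
years = 2022 - 2014
multiplier = growth_rate ** years
print(f"{multiplier:.1f}x")  # prints: 11.7x
```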

    Cloud-based disaster recovery services eliminate the need for site-to-site replication

  22. Tomi Engdahl says:

    How Computer Science Education Got Practical (Again)
    http://news.slashdot.org/story/15/06/30/016225/how-computer-science-education-got-practical-again

    In the 1980s and 1990s, thousands of young people who had grown up tinkering with PCs hit college and dove into curricula designed around the vague notion that they might want to “do something with computers.” Today, computer science education is a lot more practical — though in many ways that’s just going back to the discipline’s roots. As Christopher Mims put it in the Wall Street Journal, “we’ve entered an age in which demanding that every programmer has a degree is like asking every bricklayer to have a background in architectural engineering.”

    For programmers
    Theory, practice, and fighting for terminal time: How computer science education has changed
    http://www.itworld.com/article/2941286/careers/theory-practice-and-fighting-for-terminal-time-how-computer-science-education-has-changed.html

    When it comes to learning programming, some things have changed — but not everything

    In 1950, fifty-one people attended the Summer School on Programme Design for Automatic Digital Computing Machines at Cambridge University. The students who came to Cambridge that summer were the first to sign up specifically to learn the art on Cambridge’s EDSAC computer.

    “When I started, in ’81, the university had around 12 terminals available for the CS department, hooked to the single mainframe the university owned. Only seniors and grad students were allowed to use them.”

    Just five years later, when Pierce took his first class in 1986, his hardware environment was quite different: a lab full of Apple IIs.

    “One big change is the expectation that everyone has their own computer.”

    When I was taking computer classes in high school in the late 1980s, we discussed transistors and logic gates, not that I really remember much of it or ever fully grasped how it related to programming a computer.

    Nancie K. may have been working on assembly language code in the early 1980s, but in Rob Pierce’s experience, modern-day classes are quite different. “C/C++ has been replaced by higher-level VB and Java,” he says, “and ‘while’ and ‘for’ loops are taught long before stacks and pointers.”

    Beyond the nuts and bolts of what specifically you’d study and what machines you’d use to study it on, there’s a bigger question looming over the field: why would you bother studying the subject at all?

    “I feel like whoever was designing the curriculum was vaguely aware that people taking the class might want to do ‘stuff with computers’ in the future and didn’t feel the need to try to tie it to any other discipline.”

    The class Dr. Carlson teaches now is called “programming for engineers,” and is much more aimed at practical use. The language it’s based on is MATLAB, “a numerical programming language that’s fairly popular with academia and engineers.”

    In fact, the practical needs of both students and employers have given rise to a whole category of computer science education under the aegis of schools that aren’t colleges at all. These “code schools” are aimed at eschewing theory and giving students practical skills in a short amount of time.

    While MOOCs like Udacity have made sweeping claims that they’ll replace universities, Parker doesn’t see CS degree programs going away anytime soon. But courses like his company offers are a useful supplement. “Most employers still want CS grads with five years (real world) experience.”

    Computer Programming Is a Trade; Let’s Act Like It
    That Would Help Offset Supply-and-Demand Mismatch
    http://www.wsj.com/articles/computer-programming-is-a-trade-lets-act-like-it-1407109947

    If you’re a young person who is thinking about becoming a computer programmer but can’t afford college, you might think about skipping college altogether, says Ryan Carson, co-founder of an online coding school.

    And he isn’t alone.

  23. Tomi Engdahl says:

    RISC vs CISC: What’s the Difference?
    Analysis of ARM, X86, MIPS designs shows no difference
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327016&

    A new study comparing the Intel X86, the ARM and MIPS CPUs finds that microarchitecture is more important than instruction set architecture, RISC or CISC.

    If you are one of the few hardware or software developers out there who still think that instruction set architectures, reduced (RISC) or complex (CISC), have any significant effect on the power, energy or performance of your processor-based designs, forget it.

    Ain’t true. What is more important is the processor microarchitecture — the way those instructions are hardwired into the processor and what has been added to help them achieve a specific goal.

    This is the over-arching conclusion of a study recently published in the ACM Transactions on Computer Systems. In the paper, “ISA Wars: Understanding the Relevance of ISA being CISC or RISC,” authors Emily Blem, Jaikrishnan Menon, Thiruvengadam Vijayaraghavan, and Karthikeyan Sankaralingam report the results of a study conducted over the last four years or so by the University of Wisconsin (Madison) Vertical Research Group (VRG).

  24. Tomi Engdahl says:

    VMware will pay the government $75.5 million to settle an overcharging lawsuit
    http://uk.businessinsider.com/vmware-pays-755-million-to-settle-gsa-overcharging-lawsuit-2015-6?op=1?r=US

    VMware and one of its reseller partners will pay $75.5 million to the General Services Administration (GSA) to settle a lawsuit alleging that the virtualization software company defrauded the federal government.

    VMware is one of the largest software companies in the world, with more than $6 billion in revenue last year.

    The suit was originally filed in 2010, and remained sealed by the Department of Justice until the settlement was finalized earlier today. Cotchett, Pitre, & McCarthy, the prosecuting law firm in this case, calls it “one of the five largest recoveries against a technology company in the history of the False Claims Act” in a press release.

    “In short, the lawsuit alleged the government paid more than private companies for the same services.”

  25. Tomi Engdahl says:

    PowerShell for Office 365 powers on
    Web-based CLI is yours for the scripting
    http://www.theregister.co.uk/2015/07/01/powershell_for_office_365_powers_on/

    Microsoft has powered on PowerShell for Office 365.

    Redmond promised the tool back at its Ignite conference, and on Tuesday decided all was ready to take it into production.

    Anyone familiar with PowerShell probably won’t be in the slightest bit shocked by the tool, which offers a command line interface with which one can initiate and automate all manner of actions. Redmond’s created a script library to help you do things like add users, control licences or stop people from recording Skype meetings.

    There are a few hoops through which to jump before you can start having that kind of fun

    PowerShell is found in just about every Windows admin’s toolbox, so bringing it to Office 365 looks like a very sensible decision by Microsoft as it keeps an important constituency happy. It should also make the cloudy suite easier to operate, therefore keeping costs low.

  26. Tomi Engdahl says:

    Want to spoil your favourite storage vendor’s day? Buy cloud
    Leaving the premises might just work
    http://www.theregister.co.uk/2015/07/01/cloud_as_secondary_storage_on_premises/

    Organisations continue to buy storage. In fact, I was talking to a storage salesman not so long ago who was telling me that one of his customers regularly calls asking for a quote for “a couple more petabytes.”

    However, on-premise storage is not the end of the story. Yes, you need to have storage electronically close (with minimal latency) to your servers, but procuring on-premise storage needs more than cash. It needs power, space and support.

    You can’t keep buying more and more storage because power and data centre space are extremely limited.

    And, even if your data centre does have the space, you often can’t get the new cabinets next to your existing ones so you end up dotting your kit all over the building (with the interconnect fun that implies).

    If part of your data storage requirement can live with being offline then you have the option of writing it to tape – which, in turn, brings the problem of managing a cabinet or two full of tapes.

    Leaving aside the fact that they degrade over time if not kept properly, there’s always the issue with tape technology marching on (which means you have to hang onto your old tape drives and keep them working, just in case).

    Throw it somewhere else?

    So is there mileage in putting your data somewhere else – specifically in the cloud? In a word, “yes”. To take just one of many possible examples, Amazon’s Glacier storage costs one US cent per GB per month, which means you can keep 100TB for a shade under £5,000 per annum.

    Well, for the same 100TB of storage you’d be looking at a smidge over £18,000 on Amazon for their reduced-redundancy option – which, presumably, is fine as it’s your secondary and you have a live copy.
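    Comparisons like the two above come down to simple arithmetic. A throwaway calculator along these lines makes them easy to rerun; the default exchange rate and the example per-GB rates are placeholders, so check current price lists before relying on any output:

```python
# Back-of-the-envelope cloud storage cost estimator. All rates here are
# illustrative placeholders, not quoted prices.

def annual_cost_gbp(capacity_tb, usd_per_gb_month, usd_per_gbp=1.55):
    """Yearly sterling cost for a flat per-GB-per-month dollar rate."""
    gigabytes = capacity_tb * 1024
    return gigabytes * usd_per_gb_month * 12 / usd_per_gbp

# Archive tier at one US cent per GB per month:
print(round(annual_cost_gbp(100, 0.01)))
# A hypothetical online tier at 2.4 cents per GB per month:
print(round(annual_cost_gbp(100, 0.024)))
```

    The sterling figures you get depend heavily on the exchange rate assumed, which is worth remembering when comparing quotes taken at different times.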

    Sellers can’t ignore new markets…

    Vendors of on-premise storage are unsurprisingly also looking to sell you stuff that will enable you to use cloud storage: after all, given that they’re not getting revenue from flogging disks to you, they may as well find ways of extracting your cash by selling cloud-enabling products.

    What do we mean by “secondary?”

    Secondary storage might simply mean a duplicate copy of your core data, which you retain in the cloud in case the primary entity is corrupted, deleted or destroyed. You have choices of how you get the data to the cloud, depending on how immediately accessible you want it:

    Backups: instead of using local disk or tape drives you point your backup software or appliances at the cloud storage area. This is fine if you’ll only need to pull back lost files occasionally and you don’t mind having to do file restores on an ad-hoc basis via the backup application’s GUI
    File-level copies: you replicate data to the cloud storage using a package that spots new files and changes and replicates in near real time (if you’ve ever used Google Drive on your desktop, you’ll know the kind of thing I mean, but we’re talking about the fileserver-level equivalent in this context)
    Application-level: you run your apps in active/passive mode using their inherent replication features – for instance a MySQL master on-prem and an equivalent slave in a VM in the cloud. Actually, this isn’t really storage replication, as the data flying around is application data, not filesystem data

    The second of these three is the common desire: a near-real-time remote copy of large lumps of data. Yes, you’ll often have a bit of the other two but these (particularly app-level replication) tend to represent a minority of the data you’re shuffling.
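    A minimal sketch of that file-level approach: poll the file tree, diff modification times, and hand changed files to an uploader. This is purely illustrative; the `upload` stub stands in for a real cloud client, and commercial packages use filesystem change notifications and batched transfers rather than polling:

```python
# Minimal file-level replication sketch: detect new or modified files by
# polling modification times. Purely illustrative; `upload` is a stub.
import os
import time

def snapshot(root):
    """Map every file under root to its last-modified time."""
    state = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getmtime(path)
    return state

def changed_files(old, new):
    """Paths that are new or whose modification time has changed."""
    return [path for path, mtime in new.items() if old.get(path) != mtime]

def upload(path):
    print("replicating", path)  # stub: a real client would PUT to cloud storage

def watch(root, interval=5.0):
    state = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        for path in changed_files(state, current):
            upload(path)
        state = current
```

    Deletions, partial transfers and conflict handling are where real products earn their keep; a sketch like this only shows the change-detection core.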

    Secondary = temporary?

    The other side of secondary storage in the cloud is where you’re one of those companies that genuinely uses the cloud for short-term, high-capacity compute requirements.

    One of the benefits the cloud vendors love to proclaim from atop a convenient mountain, of course, is the idea of pay-for-what-you-use scenarios: running up loads of compute power for a short-term, high-power task then running it down again.

    Does the average business care? Nah – all this stuff about: “Oh, you can hike up your finance server’s power at year-end then turn it down again” is a load of old tosh in most cases. But there are in fact plenty of companies out there with big, occasional requirements – biotech stuff, video rendering, weather simulation, and so on – so real examples are far from non-existent.

    Going the other way

    Another thing you need to remember is that there may well be a time when you want to pull data back from the secondary store into the primary. There are a couple of considerations here: one is that many of the cloud storage providers don’t charge for inbound data transfers (i.e. data flowing into the cloud storage) but they have a per-gigabyte fee for transfers the other way.

    De-duplication is the order of the day in such cases, but in reality the transfer costs are modest (and sometimes they’re free – such as restores from Amazon Glacier).

    The cool bit is that when you combine it with their physical appliances on-premise you can layer a global namespace over the whole lot so that local servers and remote VMs can access volumes transparently. And this means that cloud-based servers can mount on-premise volumes in just the same way as in-house machines can access storage in the cloud.

    Oh, and where’s the primary?

    So we’ve talked about using the cloud as your secondary storage, and we’ve largely assumed that the primary will be on-prem. But does it have to be?

    I mentioned that data transfer out of the cloud is generally chargeable, but it’s also true that data transfer out of a particular cloud provider’s storage into another repository in another of their regions is significantly cheaper (less than a quarter of the price in an example I just checked out) than flinging it out of their cloud to your premises via the net.

    Summing up

    So there you have it. Secondary storage in the cloud is definitely feasible, but you’ll want to use some kind of access appliance to optimise throughput.

    Once you do go down this cloud route, treat your primary and secondary storage as a single entity, so that each end can access the other equally easily.

    And, finally, when designing that cloud-based secondary storage don’t forget to think about where the primary volumes should live too.

  27. Tomi Engdahl says:

    Google’s artificial-intelligence bot says the purpose of living is ‘to live forever’
    http://uk.businessinsider.com/google-tests-new-artificial-intelligence-chatbot-2015-6

    This week, Google released a research paper chronicling one of its latest forays into artificial intelligence.

    Researchers at the company programmed an advanced type of “chatbot” that learns how to respond in conversations based on examples from a training set of dialogue.

    And the bot doesn’t just answer by spitting out canned answers in response to certain words; it can form new answers from new questions.


  28. Tomi Engdahl says:

    Gartner lowers its IT spending forecast, but says activity remains high
    http://www.cio.com/article/2942414/gartner-lowers-its-it-spending-forecast-but-says-activity-remains-high.html

    Worldwide IT spending is expected to decline by 5.5 percent this year, with enterprises benefitting from lower prices on communications and IT services but also having to pay higher hardware prices in some parts of the world.

    Market research company Gartner revised its spending forecast downward on Tuesday: In April, it said IT spending in 2015 would decline 1.3 percent compared to last year.

    But numbers can sometimes be deceptive; IT activity is stronger than the spending indicates, according to John-David Lovelock, research vice president at Gartner. Price declines in segments like communications and IT services, and the move to cloud-based services, mask an increase in activity, he said.

    However, the strong dollar is resulting in price hikes on hardware, which is having a negative effect on spending.

    For example, PC vendors selling to Europe and Japan, where local currencies have fallen since the start of the year, have little choice but to raise prices to preserve profits, according to Gartner. As a result, large organizations will keep their PCs longer rather than buy less expensive models or remove requirements for key features, Gartner said earlier this year.

    All this means worldwide IT spending is now on pace to total US$3.5 trillion this year, the company said.

  29. Tomi Engdahl says:

    Now it’s official: HP will be split in two

    HP has filed a formal registration notice with the US financial supervisory authorities to establish Hewlett Packard Enterprise as a separate company. The official announcement was made on Tuesday.

    According to the notice, the enterprise-focused business unit made a profit of $1.6 billion last year on net sales of $55.1 billion. The previous year’s figures were better: $2.1 billion on $57.4 billion.

    The two resulting companies are roughly the same size: one focuses on the enterprise business and the other on the PC and printer trade.

    The split is expected to be completed by the first day of November.

    Source: http://www.tivi.fi/Kaikki_uutiset/2015-07-02/Nyt-se-on-virallista-HP-laitetaan-kahtia-3325069.html

    10 Things You Don’t Know About The HP Split
    http://www.crn.com/slide-shows/data-center/300076419/10-things-you-dont-know-about-the-hp-split.htm

  30. Tomi Engdahl says:

    Qt 5.5 Released
    http://blog.qt.io/blog/2015/07/01/qt-5-5-released/

    We have invested lots of efforts to make sure Qt 5.5 is ready for Windows 10 once it gets officially released by Microsoft.

    Linux packages are now being built on Red Hat Enterprise Linux, allowing one set of binaries to cover a wider range of Linux distributions (from RHEL 6.6 up to Ubuntu 15.04).

    Another change coming with Qt 5.5 is a greatly simplified product structure. There are now three versions of Qt available.

    Qt for Application Development is our commercial offering that allows you to create applications for all desktop and mobile platforms that Qt supports. It comes with full support and our flexible commercial licensing.

    Qt for Device Creation is the second commercial product. It targets the creation of embedded devices, and comes with a lot of tooling and support to make this as easy as possible. Of course with full support and our flexible commercial licensing as well.

    And finally, we have Qt Open Source, our open source version that you can use under the terms of the LGPL (version 2.1 and/or version 3) or GPL.

  31. Tomi Engdahl says:

    Microsoft To Launch Minecraft Education Portal For Teachers
    http://games.slashdot.org/story/15/07/01/1826254/microsoft-to-launch-minecraft-education-portal-for-teachers

    Microsoft wants to help educators use Minecraft to teach pupils about maths, history, creative design and other subjects and skills, claiming the game is already being used in classrooms in the US and UK. Minecraft developer Mojang was bought by Microsoft last year for $2.5 billion.

    Microsoft Sees Minecraft As Learning Tool For Schools
    Read more at http://www.techweekeurope.co.uk/projects/public-sector/minecraft-microsoft-schools-teachers-resource-171647

  32. Tomi Engdahl says:

    Windows 7 and 8.1 market share surge, XP falls behind OS X
    It looks like the world wants freebie Windows 10 upgrades
    http://www.theregister.co.uk/2015/07/02/windows_7_and_81_market_share_surge_xp_falls_behind_os_x/

    Why the jump? Microsoft’s signalled that Windows 7 users will be among those who, under some circumstances, get a free Windows 10 upgrade. Adopting Windows 7 in June was therefore a good idea. Windows 8.1 also registered a bump in market share, so perhaps the same motive drove that increase in adoption.

    Both of the services we track also noted sharp-ish dips for Windows XP: might the rush to abandon it finally be accelerating now that the freebie Windows 10 is near?

  33. Tomi Engdahl says:

    New racks, cables for aging and neglected data centres
    Time for a power trip
    http://www.theregister.co.uk/2015/07/02/aging_neglected_datacenters_set_for_upgrades/

    Enterprises are shuttering their smaller data centres, but are opting to shift to larger upgraded facilities rather than shifting the whole shebang to the cloud.

    451 Research’s Voice of the Enterprise: Datacenters survey found that in Q2 a whacking 87 per cent of data centre operators in Europe and North America are maintaining or racking up their data centre spending. A quarter are planning to jack up spending within the next quarter.

    Of those companies looking to increase spending, 37 per cent are looking to retrofit or upgrade existing data centres or projects. Almost two thirds of organisations would rather consolidate IT infrastructure than build out a new data centre.

    Likewise, companies hitting the magic number of 75 per cent data centre utilisation are more inclined to look to colocation and cloud providers than build a new data centre.

  34. Tomi Engdahl says:

    The case against Open Compute Project Storage flotation
    OCP-S caught in no-man’s land between enterprise and hyper-scale
    http://www.theregister.co.uk/2015/07/02/open_compute_project_storage_flotation_questionable/

    Did you know there was a storage part of the Open Compute Project? If not, you do now.

    The Facebook-generated OCP aims to make good, basic hardware available for data centres at low cost, with no bezel tax and no unwanted supplier differentiation justifying high prices. Its main focus is servers, but that’s not all, as there is also a storage aspect.

    The OCP-S project covers:

    Cold Storage
    Fusion-io
    Hyve – “Torpedo” 2 x OpenU storage server that can accommodate 15 3.5″ drives in a 3 x 5 array
    OpenNVM – Open-source project for creating new interfaces to non-volatile memory
    OpenVault – 30 drives in 2U

    These seem to be limited use cases; JBODs or disk drawers, flash and archive storage.

    Web access via the hot links provided for each category is variable. Neither of the Fusion-io links for the specification and CAD models works.

    El Reg: What do you understand the status of the OCP storage (OCP-S) initiative to be?

    Michael Letschin: While a work in progress, the lack of storage industry support means the OCP-S concept is still very much a pipe dream for all but the largest webscale companies. For customers to be considering the move, it’s fair to say that they will have to have taken the leap and embraced software-defined storage (SDS) as a starting point.

    El Reg: Do you think there is a need for it?

    Michael Letschin: The concept behind the Open Compute project is completely worthwhile, but though it brings the promise of true commodity hardware to the forefront, it hinges on whether systems can be integrated easily into the current data centre.

    El Reg: Is storage flash and disk hardware developing so fast that OCP-S cannot keep up?

    Michael Letschin: No. The interfaces for these drives are still much the same, so as to allow for integration into existing infrastructures. There is no reason that OCP-S would be any different.

    El Reg: Is storage driven so much by software that interest in OCP-S (hardware-based) items on their own is low?

    Michael Letschin: Given scale-up solutions are still the norm for enterprises, the concept of a single head storage server is not of much interest today. As scale-out becomes more commonplace, the OCP-S hardware will understandably become more appealing: the pieces become modules that are essentially just bent metal.

    Michael Letschin: In today’s environments, yes. OCP-S assumes scale-out and this makes sense for the likes of Facebook, but it’s still early days for software-defined scale-out in the enterprise market. For example, the Open Rack standard is designed with new data centres in mind.

  35. Tomi Engdahl says:

    Jordan Kahn / 9to5Mac:
    You can now use any Android app on your Mac w/ BlueStacks App Player
    http://9to5mac.com/2015/07/01/play-mobile-android-apps-mca-bluestacks-app-player/

    BlueStacks App Player

    Crossy Road on Mac with BlueStacks

    BlueStacks, a free desktop Android emulator that lets users play any mobile game or app on the big screen with a mouse and keyboard, has mostly been limited to PC users until today. But Mac users are about to get access to the software that the company says already has around 90 million users on Windows.

    http://www.bluestacks.com/

  36. Tomi Engdahl says:

    John Callaham / Windows Central:
    Microsoft confirms its new Edge browser won’t support Silverlight
    http://www.windowscentral.com/microsoft-confirms-its-new-edge-browser-wont-support-its-silverlight-player

    Microsoft has already announced that its new Microsoft Edge web browser in Windows 10 will not be using many of the features that were a part of its old Internet Explorer browsers. That includes support for ActiveX-based plug-ins. Today, Microsoft confirmed that the ditching of ActiveX also means Edge won’t support the company’s own Silverlight web-based media player.

    Silverlight was first introduced in 2007 as an alternative to Adobe’s Flash player for web-based media. It was most famously used by Netflix for its desktop streaming video service. The last major release was Silverlight 5 in 2011 and Microsoft has not indicated plans to release a major new version.

    Most sites have now abandoned Silverlight and Netflix is transitioning its web player to HTML5.

  37. Tomi Engdahl says:

    Robert McMillan / Wall Street Journal:
    Hewlett-Packard officially files to split, details its two new companies, HP Inc. and Hewlett Packard Enterprise

    Hewlett-Packard Officially Files to Split
    Tech giant details its two new companies, HP Inc. and Hewlett Packard Enterprise
    http://www.wsj.com/article_email/hewlett-packard-officially-files-to-split-1435783640-lMyQjAxMTA1NjAyMTIwMTE0Wj

  38. Tomi Engdahl says:

    Microsoft: Stop using Microsoft Silverlight. (Everyone else has)
    Says websites should switch to HTML5-based playback as netizens snub plugins
    http://www.theregister.co.uk/2015/07/02/microsoft_silverlight/

    Microsoft is encouraging companies that use its Silverlight media format on their web pages to dump the tech in favor of newer, HTML5-based media playback systems.

    “The commercial media industry is undergoing a major transition as content providers move away from proprietary web plug-in based delivery mechanisms (such as Flash or Silverlight), and replace them with unified plug-in free video players that are based on HTML5 specifications and commercial media encoding capabilities,” the software giant said in a Thursday blog post.

    Similarly, Redmond observed, browser makers are moving away from supporting media plugins. Google plans to drop support for the outdated Netscape Plugin API (NPAPI) later this year, while Microsoft Edge, the new browser that will ship with Windows 10, was designed not to support plugins from the get-go.

    One reason is because vulnerabilities in media plugins often become vectors for web-based attacks, something to which Silverlight fell prey last year.

    Instead, Microsoft and others now recommend that web developers handle video and other media playback via a number of new protocols introduced in the ongoing HTML5 standardization effort.

  39. Tomi Engdahl says:

    Jaguar Land Rover demands more flexibility from its IT staff

    The car manufacturer’s CTO considers the greying workforce the biggest obstacle in the transition towards a more agile IT infrastructure.

    Anthony Headlam, Chief Technology Officer of the renowned British car manufacturer Jaguar Land Rover, speaks openly about his IT department’s conservatism and resistance to change.

    Speaking at the Cloud World Forum event in London last week, Headlam reported seeing resistance to change among older IT staff every day.

    This is especially true for the business’s support functions, the so-called back-office tasks.

    “I want to accelerate the IT systems cycle so that software is updated monthly. Faster, more agile IT systems are JLR’s lifeline, especially now that the company has moved its production outside Great Britain for the first time.”

    Like other car manufacturers, Jaguar Land Rover is developing intelligent cars and new production methods at new plants.

    “Our IT department uses up to 1,700 different software applications, the oldest of which date from the 1970s.”

    Headlam, appointed Jaguar Land Rover’s CTO in 2013, says that in his first year as much as 90 percent of the IT department’s time went into merely operating and maintaining existing systems. Today that maintenance share has been reduced by a quarter.

    IT-mindedness and flexibility are required

    The transfer of production abroad and the manufacturing processes for new, intelligent vehicles demand ever more flexible IT systems suited to the industrial Internet.

    “Today all our cars are networked. This means we constantly receive customer data on how cars are used and serviced in different parts of the world. For this kind of big data and analytics, cloud services are needed,” says Headlam.

    “And to manage such amounts of data, we cannot rely on technology from the 1970s or ’80s that may have been gradually updated in the 2000s,” says Headlam of the company’s IT renovation challenges.

    Source: http://www.tivi.fi/Kaikki_uutiset/2015-07-03/Jaguar-Land-Rover-vaatii-it-v%C3%A4elt%C3%A4-enemm%C3%A4n-joustavuutta-3325126.html

    Jaguar Land Rover: staff are biggest obstacle to IT transformation
    http://www.cloudpro.co.uk/cloud-essentials/5175/jaguar-land-rover-staff-are-biggest-obstacle-to-it-transformation

    CTO says greying IT workforce are stubborn in face of new agile approach

    People are the biggest obstacle Jaguar Land Rover must overcome to implement an agile methodology that will help modernise its IT infrastructure.

    Reply
  40. Tomi Engdahl says:

    Intel’s first Skylake chips coming in August
    http://www.computerworld.com/article/2943204/computer-processors/intels-first-skylake-chips-coming-in-august.html

    Fancy a Mac or Windows 10 PC with Intel’s new processors code-named Skylake? That will soon be possible: Intel will launch its first chips based on the new architecture in the first week of August.

    The first Skylake chips will be high-end gaming processors that can be overclocked, and will be launched on or before August 7 during the Gamescom conference in Cologne, Germany, which runs from August 5 to 9, according to a source familiar with Intel’s plan.

    Intel declined to comment on the launch of Skylake chips. But one can expect Skylake chips for mainstream desktops and laptops to come in quick order during or after IDF.

    Intel’s goal with Skylake is to make PC usage more convenient. With that in mind, Intel has talked about “wire-free” technologies in Skylake so PCs could charge and transfer data to peripherals wirelessly.

    Dell, Hewlett-Packard and Asus will ship Windows 10 PCs based on Skylake in the second half this year.

    Skylake also has new virtualization, boot, system management and lockdown features, which will be detailed in technical sessions.

    Skylake will take on AMD’s chips code-named Carrizo, which are now reaching PCs. AMD is rushing to release its next-generation chips based on a CPU core code-named Zen, which will be in PCs next year.

    Reply
  41. Tomi Engdahl says:

    A looksee into storage upstart Hedvig’s garage
    Quick dip under a distributed storage platform’s covers
    http://www.theregister.co.uk/2015/07/02/a_looksee_into_hedvigs_storage/

    Hedvig decloaked from stealth recently, saying it’s producing converged block, file and object storage for enterprises with Facebook/Amazon-level scale, costs and efficiencies.

    It does this through its DSP (Distributed Storage Platform) software

    Founder and CEO Avinash Lakshman dismissed most storage innovation over the past decade as mere tweaking. “Amazon and Facebook” he said, “gave me the experience to enable a fundamental innovation for storage.” That experience included building Amazon’s Dynamo NoSQL precursor, and Cassandra for Facebook.

    Facebook’s Cassandra deployment served 100 million users on a petabyte-scale cluster operated by just four people, including Avinash. So he knows how to build extreme scale-out storage systems that run reliably on commodity hardware and need minimal administration effort.

    In a nutshell: a cluster of commodity servers with local flash and disk media, running Hedvig’s distributed storage software.

    What we have is a 2-layer piece of software with a Hedvig Storage Service (HSS) being the media interface, and the upper layer is split into a Hedvig Storage Proxy and a set of Hedvig APIs. Pretty straightforward so far, except that the servers can be either X86 or ARM-based.

    We were told by Hedvig marketing veep Rob Whitely: “Facebook and Amazon are marching towards ARM and flash.”

    One platform replaces the need for separate block, file, object, cloud, replication, backup, compression, deduplication, flash, and caching equipment. Each capability can be granularly provisioned in software on a per-Virtual Disk basis to suit your unique workloads.

    The HSS provides a data management service with self-healing, clustering, and advanced storage capabilities. Its data persistence service maintains state and tracks the health of cluster nodes. It forms an elastic cluster using any commodity server, array, or cloud infrastructure.

    The Hedvig Storage Proxy presents block, file, or object via a VM or Docker container to any compute environment.
    Whitely said: “Everything is thinly provisioned. We offer inline dedupe, compression (both global), snapshots, and clones. Turn on and off by app.” He told us Nutanix inline dedupes by default and so can waste resources deduping stuff that can’t be deduped.

    Hedvig claims the DSP is a superset of current storage styles such as virtual SANs, software-defined storage, hyper-converged system storage and storage arrays. It “seamlessly bridges and spans private and public clouds, easily incorporating cloud as a tier and DR (disaster recovery) environment.”

    I think we can take it that Hedvig can build this software product. Whether enterprise storage customers are ready for it is another matter. Suggested use cases are storage for any hypervisor in a server virtualization environment, private cloud, and Big Data – pretty generic.

    There are few greenfield storage deployments, and Hedvig is banking on the cost and admin complexity pain of existing storage deployments encouraging enterprises to look its way and try out its software in a POC or pilot.

    Reply
  42. Tomi Engdahl says:

    Micron fires off some in-memory flash data rockets
    New unit will go beyond selling chips and standard components
    http://www.theregister.co.uk/2015/06/26/microns_inmemory_flash_data_rockets/

    Micron’s Storage Business Unit (SBU) wants to shake up the server status quo with a dynamic upstart duo: in-memory app accelerating data processing rockets and instant access, cold data flash vaults.

    How it’s going to do that begins with its NAND chip development plans, which we learned about at a Silicon Valley briefing. It starts at the chip level.

    Reply
  43. Tomi Engdahl says:

    Tablet Shipments on the Wane
    http://www.eetimes.com/document.asp?doc_id=1327065&

    Tablet shipments are expected to decline in 2015 as the market loses momentum amid saturation among consumers in developed regions, according to analysts.

    International Data Corp. (IDC) forecasts that shipments of tablets and 2-in-1 devices will decline to 221.8 million this year, down 3.8% from 2014.
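
    As a quick sanity check (my own back-of-the-envelope arithmetic, not from the IDC report), the quoted figures imply a 2014 baseline of roughly 230.6 million units:

```python
# If 2015 shipments of 221.8 million represent a 3.8% decline,
# the implied 2014 baseline follows by dividing out the decline.
shipments_2015_m = 221.8
decline = 0.038
baseline_2014_m = shipments_2015_m / (1 - decline)
print(f"Implied 2014 shipments: {baseline_2014_m:.1f} million")  # ~230.6 million
```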

    Other market research firms are similarly pessimistic about the tablet market in 2015. IHS Technology predicts that the tablet market will decline in 2015 and remain flat in 2016 before returning to growth in 2017.

    According to ABI Research, tablet shipments declined in the first quarter by 35% compared with the fourth quarter of 2014 and by 16% compared to the first quarter of 2014.

    “People with the wherewithal to buy their first tablets have done so,” said Jeff Orr, a senior practice director at ABI. Orr said tablet shipments may never again see the growth rates they enjoyed when the products first emerged, but noted that tablets are still a strong force in the computing market.

    The rise of larger smartphones is also contributing to the decline of tablet shipments, according to analysts.

    Reply
  44. Tomi Engdahl says:

    Google releases Material Design Lite, a web framework for making Material Design-style websites
    http://9to5google.com/2015/07/06/material-deisgn-lite-release/

    Google Developers, the team at Google which creates tools and learning materials for developers to take advantage of, has released a front-end web framework for building sites to the Material Design specification

    The new framework, called Material Design Lite (MDL), includes Material Design-style components – like buttons, checkboxes, input fields, custom typography, and more – as well as a responsive grid and breakpoints (i.e. what happens when the window gets too narrow to display all elements side-by-side) that adhere to the Material Design adaptive UI guidelines. Google’s guidelines for how an app or website using Material Design reflows content at different screen sizes and as a screen resizes in real-time make for visual consistency across a range of devices of all shapes and sizes. The company says MDL is tailored towards websites heavy on text, like blogs and marketing pages.

    Anyone who has used the Bootstrap web framework will understand MDL right away.

    Introducing Material Design Lite
    getmdl.io – a library of components & templates in vanilla CSS, HTML and JS
    https://medium.com/google-developers/introducing-material-design-lite-3ce67098c031

    Back in 2014, Google published the material design specification with a goal to provide guidelines for good design and beautiful UI across all device form factors. Today we are releasing our effort to bring this to websites using vanilla CSS, HTML and JavaScript. We’re calling it Material Design Lite (MDL).

    Reply
  45. Tomi Engdahl says:

    The future of artificial intelligence: Myths, realities and aspirations
    http://blogs.microsoft.com/next/2015/07/06/the-future-of-artificial-intelligence-myths-realities-and-aspirations/

    Only a few years ago, it would have seemed improbable to assume that a piece of technology could quickly and accurately understand most of what you say – let alone translate it into another language.

    A new wave of artificial intelligence breakthroughs is making it possible for technology to do all sorts of things we at first can’t believe and then quickly take for granted.

    And yet, although these systems can do some individual tasks as well as or even better than humans, technology still cannot approach the complex thinking that humans possess.

    “It’s a long way from general intelligence,” Bishop said.

    The latest breakthroughs in artificial intelligence are the result of core advances in AI, including developments in machine learning, reasoning and perception, on a stage set by advances in multiple areas of computer science.

    Computing power has increased dramatically and has scaled to the cloud. Meanwhile, the growth of the Web has provided opportunities to collect, store and share large amounts of data.

    There also have been great strides in probabilistic modeling

    The new capabilities also are coming from advances in specific technologies, such as machine learning methods called neural networks, which can be trained from massive data sets to recognize objects in images or to understand spoken words.

    Another promising effort is “integrative AI,” in which competencies including vision, speech, natural language, machine learning and planning are brought together to create more capable systems, such as one that can see, understand and converse with people.

    “We see more and more of these successes in daily life,” Horvitz said. “We quickly grow accustomed to them and come to expect them.”

    That, in turn, means that big technology companies are growing more dependent on building successful artificial intelligence-based systems.

    “AI has become more central to the competitive landscape for these companies,” Horvitz said.

    In the long run, Horvitz sees vast potential for artificial intelligence to enhance people’s quality of life in areas including education, transportation and healthcare.

    Despite the recent breakthroughs in artificial intelligence research, many experts believe some of the biggest advances in artificial intelligence are years, if not decades, away. As these systems improve, Horvitz said researchers are creating safeguards to ensure that AI systems will perform safely even in unforeseen situations.

    “We have to stay vigilant, be proactive and make good decisions, especially as we build more powerful intelligences, including systems that might be able to outthink us or rethink things in ways that weren’t planned by the creators,” he said.

    Researchers, scientific societies and industry experts are building in tools, controls and constraints to prevent unexpected consequences.

    They also are constantly evaluating ethical and legal concerns

    Reply
  46. Tomi Engdahl says:

    How Bad User Interfaces Can Ruin Lives
    http://tech.slashdot.org/story/15/07/06/225217/how-bad-user-interfaces-can-ruin-lives

    The frustration of caregivers in these contexts was palpable. They’d teach an older user how to use a key service like Web-based mail to communicate with their loved ones, only to discover that a sudden UI change caused them to give up in frustration and not want to try again

    UI Fail: How Our User Interfaces Help to Ruin Lives
    http://lauren.vortex.com/archive/001112.html

    A couple of months ago, in Seeking Anecdotes Regarding “Older” Persons’ Use of Web Services, I asked for stories and comments regarding experiences that older users have had with modern Web systems, with an emphasis on possible problems and frustrations.

    Any stereotypes about “older” users were quickly quashed.

    While some of the users had indeed never had much computer experience, a vast number of responses involved highly skilled, technologically-savvy individuals — often engineers themselves — who had helped build the information age but now felt themselves being left behind by Web designers who simply don’t seem to care about them at all.

    While issues of privacy and security were frequently mentioned in responses, as were matters relating to fundamental service capabilities, issues and problems relating to user interfaces themselves were by far the dominant theme.

    Some of these were obvious.

    There is enormous, widespread frustration with the trend toward low-contrast interfaces and fonts, gray fonts on gray backgrounds and all the rest. Pretty, but unreadable to many with aging eyes (and keep in mind, visual acuity usually begins to drop by the time we’ve started our 20s).

    Many respondents noted that screen magnifiers can’t help in such situations — they just end up with a big low-contrast blob rather than a small low-contrast blob.
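
    For reference, low-contrast styling of this kind is exactly what the WCAG contrast-ratio formula is designed to catch. Here is a minimal sketch of that calculation (the example colors below are hypothetical, not taken from any particular site):

```python
def _linear(channel):
    """Convert one 0-255 sRGB channel value to linear light (WCAG 2.x formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) color."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical colors) up to 21:1 (black on white)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast((0, 0, 0), (255, 255, 255)), 1))        # 21.0: black on white
print(round(contrast((153, 153, 153), (238, 238, 238)), 1))  # ~2.5: gray on gray, below the 4.5:1 AA minimum
```

    WCAG 2.x recommends at least 4.5:1 for normal body text, a threshold many of the low-contrast designs complained about here fail by a wide margin.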

    But then we really get into the deeper nitty-gritty of UI concerns. It’s a long and painful list.

    Hidden menus. Obscure interface elements (e.g., tiny upside-down arrows). Interface and menu elements that only appear if you’ve moused over a particular location on the display. Interface elements that are so small or ephemeral that they can be a challenge to click even if you still have the motor skills of youth. The list goes on and on.

    And beyond this, there is even more frustration with what’s viewed as undocumented and unnecessary changes in interfaces.

    For a user with fading memory (another attribute that begins to surface relatively early in life) the sudden change of an icon from a wrench to a gear, or a change in a commonly used icon’s position, can trigger such frustration that users who could most benefit from these systems — especially for basic communications — become embarrassed and, not wanting to ask for help, give up and withdraw back into deadly isolation.

    Reply
  47. Tomi Engdahl says:

    Software Devs Leaving Greece For Good, Finance Minister Resigns
    http://yro.slashdot.org/story/15/07/06/1645220/software-devs-leaving-greece-for-good-finance-minister-resigns

    New submitter TheHawke writes with this story from ZDNet about the exodus of software developers from Greece. “In the last three years, almost 80 percent of my friends, mostly developers, left Greece,” software developer Panagiotis Kefalidis told ZDNet. “When I left for North America, my mother was not happy, but… it is what it is.” It’s not just the software developers quitting either.

    Reply
  48. Tomi Engdahl says:

    Samsung stuffs 2 TERABYTES into a single flash drive
    Wham, bam, thank you, NAND
    http://www.theregister.co.uk/2015/07/07/samsung_2tb_hard_drive/

    Samsung has brought out a pair of mighty 2TB internal solid-state drives (SSDs) aimed at the consumer market.

    The South Korean electronics giant said its 850 PRO and EVO SSDs pack 2TB of capacity into a 2.5in drive enclosure. Each drive contains 32 128Gb Samsung V-NAND chips as well as four 4GB DRAM chips.
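
    The chip count only adds up if each of the 32 NAND packages stacks several dies. A back-of-the-envelope check from the article’s numbers (assuming binary units; the actual package layout is not stated in the article):

```python
capacity_gb = 2 * 1024      # 2TB drive, in GB
packages = 32               # V-NAND packages per drive
die_gb = 128 / 8            # one 128Gb die = 16 GB
gb_per_package = capacity_gb / packages     # 64 GB per package
dies_per_package = gb_per_package / die_gb  # 4 stacked dies per package
print(gb_per_package, dies_per_package)     # 64.0 4.0
```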

    Samsung has long touted the 3D V-NAND chips as a more scalable and power-efficient alternative to previous flash storage methods.

    A 2TB capacity had been previously reported as possible, should users ask for one. Samsung said that, indeed, users had clamored for more storage space in the EVO and PRO SSD lines.

    “Samsung experienced a surge in demand for 500 gigabyte (GB) and higher capacity SSDs with the introduction of our V-NAND SSDs,” said Samsung memory senior vice president of branded product marketing Un-Soo Kim.

    “The release of the 2TB SSD is a strong driver into the era of multi-terabyte SSD solutions.”

    The 2TB 850 PRO costs $999 and carries a 10-year, 300TB-written warranty

    Reply
  49. Tomi Engdahl says:

    Mozilla’s plans for Firefox: More partnerships, better add-ons, and faster updates
    http://venturebeat.com/2015/07/06/mozillas-plans-for-firefox-more-partnerships-better-add-ons-faster-updates-and-more/

    Mozilla is reexamining and revamping the way it builds, communicates, and decides features for its browser. In short, big changes are coming to Firefox.

    Mozilla doesn’t break out the exact numbers for Firefox, though the company does say “half a billion people around the world” use the browser. But Firefox has been bleeding market share for months, and something has to be done.

    Reply
  50. Tomi Engdahl says:

    Fujitsu’s 14,000-terabyte storage system

    The new Eternus DX8700 S3 and DX8900 S3 storage systems can accommodate up to 14 petabytes (14,000 terabytes) of data, Fujitsu says in a bulletin. They can handle 4 million I/O transactions per second.
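
    A quick unit check on the quoted figures (my own arithmetic, decimal units assumed):

```python
petabytes = 14
terabytes = petabytes * 1000   # 14,000 TB, matching the bulletin
iops = 4_000_000               # claimed I/O transactions per second
print(terabytes, iops // petabytes)  # 14000 285714  (TB, IOPS per petabyte)
```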

    Source: http://www.tivi.fi/Kaikki_uutiset/fujitsulta-14-000-teratavun-tallennusjarjestelma-3325305

    Reply
