Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is coming to change all business sectors and our daily work even more than before. Digitalisation also changes the IT sector: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring will become truly possible. The workload of software implementation projects will be reduced significantly as software needs less adjusting. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to change, as security threats increasingly move to the digital world. For the IT sector, digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If it is, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The increase in productivity is promised to be stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America adopt between one and ten mobile applications, indicating a significant acceptance of these solutions. Embracing mobility promises to increase visibility and responsiveness in the supply chain when properly leveraged. Increased employee productivity and business process efficiencies are seen as key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. These two trends are connected — or should connect, at least. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken more into use. Cloud is the next generation of the supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011“).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation, you need to combine traditional IT with agile and web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating two-speed IT (bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
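
What such an API-centric layer might look like is easiest to show with a small sketch. Below is a minimal Python example using Flask; the endpoint, function names and canned data are hypothetical assumptions for illustration, not a reference architecture. The fast-moving, user-facing side talks JSON over HTTP to a thin facade, which is the only thing allowed to touch the slow back-end system of record.

```python
# Hypothetical sketch of a thin API layer in front of a slow system of record.
# Flask and all endpoint/function names here are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_order_from_erp(order_id):
    # Stand-in for a call into the slow back-end operational system
    # (the "traditional IT" side); here it just returns canned data.
    return {"id": order_id, "status": "shipped"}

@app.route("/api/v1/orders/<int:order_id>")
def get_order(order_id):
    # The "agile IT" side iterates on this HTTP/JSON facade without
    # touching the system of record behind it.
    return jsonify(fetch_order_from_erp(order_id))

if __name__ == "__main__":
    app.run(port=5000)
```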

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the aging OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe some months back. This will keep system administrators busy, because there is only around half a year left, and migrating to Windows Server 2008 or Windows Server 2012 may prove difficult. Microsoft and support companies do not seem to be interested in continuing Windows Server 2003 support, so for those who need it, the custom pricing can be “incredibly expensive”. At this point it seems that many organizations want a new architecture and consider moving the servers to the cloud as one option.

Windows 10 is coming to PCs and mobile devices. Just a few months back, Microsoft unveiled a new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is already available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show the Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing the Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. The Windows Phone Store and Windows Store would be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for the Apple iPad and iPhone. It also has an email client compatible with both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article “Microsoft May Soon Replace Internet Explorer With a New Web Browser” says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drives with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Later (within the next two years) Intel promises 10TB+ SSDs thanks to 3D Vertical NAND flash memory. Interfaces to SSDs are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large volumes in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that everybody who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
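
To see why the $/GB gap pushes buyers toward hybrid setups, here is a back-of-the-envelope calculation in Python. The per-gigabyte prices are rough, assumed 2015-era figures chosen purely for illustration, not numbers from any of the sources above.

```python
# Rough $/GB comparison; both prices are assumptions for illustration only.
HDD_PER_GB = 0.04  # assumed hard disk price, $/GB
SSD_PER_GB = 0.50  # assumed SSD price, $/GB

def hybrid_cost(total_gb, hot_fraction):
    """Cost of a hybrid pool: the 'hot' fraction on SSD, the rest on disk."""
    hot_gb = total_gb * hot_fraction
    return hot_gb * SSD_PER_GB + (total_gb - hot_gb) * HDD_PER_GB

total_gb = 100_000  # a 100TB pool, expressed in GB
print(f"All-SSD: ${total_gb * SSD_PER_GB:,.0f}")
print(f"All-HDD: ${total_gb * HDD_PER_GB:,.0f}")
print(f"Hybrid, 10% hot data on SSD: ${hybrid_cost(total_gb, 0.10):,.0f}")
```

With these assumed prices the hybrid pool costs a small fraction of the all-SSD one while still serving the hot 10% of the data at flash speed, which is the whole argument for hybrid storage.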

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about the device. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There are no great reasons for growth or decline to be seen in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be more and more taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick is likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and an ARM processor (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects that the stick-sized PC market will grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable REVOLUTION. Tablets and smart phones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a mythical thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or other tablet could be a usable development environment. At least it is worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems. And it is not only Linux anymore, as Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
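
To make the kernel-sharing point concrete, here is a minimal sketch that launches two sandboxed containers from Python by shelling out to the docker CLI. It assumes Docker is installed and its daemon is running; the image and container names are arbitrary illustrations.

```python
# Minimal sketch: two containers, one shared kernel, separate sandboxes.
# Assumes the docker CLI is installed and the Docker daemon is running.
import subprocess

def run_container(name, image, command):
    # --rm deletes the container on exit; each container gets its own
    # filesystem and process namespace on top of the shared host kernel.
    return subprocess.run(
        ["docker", "run", "--rm", "--name", name, image] + command,
        capture_output=True, text=True, check=True,
    )

for name in ("app-a", "app-b"):
    result = run_container(name, "busybox", ["uname", "-r"])
    # Both containers report the same kernel release: same kernel,
    # isolated everything else.
    print(name, result.stdout.strip())
```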

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers. China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (a Linux-based desktop OS), and Dell will preinstall NeoKylin on commercial PCs in China. The plan is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the control plane from the underlying physical equipment. Such a controller virtualizes the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.
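
As a toy sketch of the controller idea (one central object holding the control plane while the physical devices keep only the data plane), consider the following Python fragment. Every class and method name here is a hypothetical illustration, not any vendor’s actual SDN or SDS API.

```python
# Toy illustration of a software-defined controller; all names hypothetical.
class Device:
    """A physical box that keeps only the data plane."""
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind          # "network", "server" or "storage"
        self.utilization = 0.0    # fraction of capacity in use

class Controller:
    """Central control plane, abstracted away from the physical equipment."""
    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    def report(self):
        # One place to see utilization across network, server and
        # storage resources, regardless of the underlying hardware.
        return {d.name: (d.kind, d.utilization) for d in self.devices}

controller = Controller()
controller.register(Device("leaf-switch-1", "network"))
controller.register(Device("hypervisor-7", "server"))
controller.register(Device("flash-array-2", "storage"))
print(controller.report())
```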

New software-defined networking apps will be delivered in 2015. And so will software-defined storage. And software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard hardware from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing and expects that over half the chips Intel will sell to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like A.W.S. “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips, and customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price on data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits may provide a lot more computing power at a much lower power consumption, but traditionally programming them has been time consuming. This can change with the introduction of new tools (the next step from techniques learned from GPU acceleration). Xilinx has developed the SDAccel tools to develop algorithms in the C, C++ and OpenCL languages and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
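
To give a feel for the programming model, here is a minimal OpenCL vector-add written in Python with pyopencl. Tool flows such as SDAccel aim to take this same kind of kernel source and turn it into FPGA logic; this sketch simply runs on whatever OpenCL device is available and is not SDAccel-specific.

```python
# Minimal OpenCL example via pyopencl: add two vectors on any OpenCL device.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()   # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel: the kind of C-like source that FPGA tool flows compile to logic.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```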


If there is one enduring trend for memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal memory for instant-on computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working on memristor memories that are promised to be akin to RAM but can hold data without power. The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 to 128 times denser, in fact. You could very well have 512GB of memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting on emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Turkish Ministry Recommends Banning Super-Violent Minecraft
    from the not-too-crafty dept
    https://www.techdirt.com/articles/20150310/11240330272/turkish-ministry-recommends-banning-super-violent-minecraft.shtml

    Insanely popular game Minecraft is known for a lot of things. It’s a fantastic creative outlet and the digital sandbox of youngsters’ dreams, for instance. The game has also been known to raise the ire of unrelated companies who somehow think all that creativity by gamers is something that can be sued over. It’s known for amazing user-generated content, including games within games and replicas of entire cities. The nation of Turkey is known for very different things. It’s a country that absolutely loves to censor stuff, for instance.

  2. Tomi Engdahl says:

    Was Linus Torvalds Right About C++ Being So Wrong?
    http://developers.slashdot.org/story/15/03/10/2038228/was-linus-torvalds-right-about-c-being-so-wrong

    Perhaps the most famous rant against C++ came from none other than Linus Torvalds in 2007. “C++ is a horrible language,” he wrote, for starters. “It’s made more horrible by the fact that a lot of substandard programmers use it, to the point where it’s much much easier to generate total and utter crap with it.” He’s not alone: A lot of developers dislike how much C++ can do “behind the scenes” with STL and Boost, leading to potential instability and inefficiency. And yet there’s still demand for C++ out there.

    Over at Dice, Jeff Cogswell argues that C++ doesn’t deserve the hatred.

    Linus Torvalds Was (Sorta) Wrong About C++
    http://news.dice.com/2015/03/10/linus-torvalds-was-sorta-wrong-about-c/?CMPID=AF_SD_UP_JS_AV_OG_DNA_

    With all the new (and new-ish) languages out there, you might wonder why it’s still worth learning C++, a language first invented in 1983. Wedged amidst lower-level languages such as C, C++ went through a spike in popularity in the mid-‘90s, when developers realized it was easily translatable to assembly language; but today’s higher-level languages abstract the processor even further away.

    C++ has a lot in common with its parent, C; but C++ does a good bit more behind the scenes.

    But perhaps the most famous rant against C++ came from none other than Linus Torvalds. It features some choice bits:

    C++ is a horrible language. It’s made more horrible by the fact that a lot of substandard programmers use it, to the point where it’s much much easier to generate total and utter crap with it. Quite frankly, even if the choice of C were to do *nothing* but keep the C++ programmers out, that in itself would be a huge reason to use C.

    Torvalds had a problem with the library features of C++ such as STL and Boost, which he thinks are a.) unstable, and b.) inefficient, forcing developers to rewrite apps once they realize their code depends too much on the nice object models around it

    Tell us how you really feel, Linus.

    There are plenty of rebuttals to his attack, but I’ll just make two points. First, he obviously knows his stuff. Second, C does have its place… if you’re writing systems-level code that you want as tight and portable as possible. That latter concern aside, though, this is the 21st century: Why write dozens of lines of code when a single line of code (as with C++) will do it?

    A multitude of companies believe the benefits of C++ outweigh the drawbacks; although higher-level languages such as Python and C# have really taken off, there are still lots of C++ jobs; it’s not going away anytime soon.

    Along Came C++11

    Fifteen or so years ago, as languages such as JavaScript and Python became more popular, programmers began to embrace techniques and styles that, while not new, were easily available in those languages. Meanwhile, a group of C++ experts began putting together a library called Boost, which pushed the features of C++ to the limits, giving programmers access to the techniques commonly used in other, newer languages. Many of the creators of Boost were on the C++ Standards Committee, and some of the Boost features have found their way into the latest official incarnation of C++, called C++11.

    Keeping with our theme here, C++11 does even more “behind the scenes” than its predecessors. But that’s not a bad thing, depending on how you use it; for example, C++11 now offers lambda functions as an official part of the language.

  3. Tomi Engdahl says:

    A gold MacBook with just ONE USB port? Apple, you’re DRUNK
    The ego has landed: (No) power (socket) to the people!
    http://www.theregister.co.uk/2015/03/10/new_apple_macbook_gold_one_usb_port_what_madness_is_this/

    Steve Jobs would confide that LSD was a formative influence on his life, one that distinguished him from his less-adventurous peers in the tech industry.

    Jobs is gone, but I wonder if someone left some Pounds Shillings and Pence lying around the Cupertino campus?

    You may not have noticed, but the world’s richest company* announced two things yesterday.

    Apple didn’t just announce a pointless and expensive watch. It also announced a pointless and expensive laptop. The new MacBook is a cripplingly underpowered machine with just one port. For everything. Including power.

    So what’s wrong with doing things for the sake of it?

    Apple did something similar, but less dramatic, with laptop design. It began to make them insanely thin. They were far more expensive than they needed to be, and more fragile than competitors’ machines. But Apple design again drove up quality in the market, and raised user expectations.

    Jobs was clearly proud of this

    Because we can

    Now compare this to the Watch and the Netbook – ooops, sorry, I meant the new MacBook.

    The new laptop continues to “push the envelope” for size and weight, but it comes with some very familiar compromises. The low power chip is a giveaway (and considering the Air and MacBook Pro offer outstanding performance for the same money).

    Removing all but a headphone jack and a single USB Type-C port is the real compromise. The Type-C replaces the MagSafe power socket – so you can’t plug in a USB mouse and charge the computer at the same time. To do that, you’ll need a $79 adapter. You’ll need that to use your current accessories while charging.

    And if you use VGA and HDMI monitors you’ll need both VGA and HDMI flavours of adapter. You’ll also need adapters for things you didn’t realise you needed. There’s no SD card slot on the machine, so budget for one of those. And probably a hub.

    Did Apple really need to throw out a dedicated power socket? It’s pretty fundamental. It’s even more fundamental a couple of years down the line, when the battery holds a fraction of its original charge.

  4. Tomi Engdahl says:

    The MintBox Mini is a silent, quad-core Linux Mint PC that fits in your pocket
    http://www.pcworld.com/article/2871328/the-mintbox-mini-is-a-silent-quad-core-linux-mint-pc-that-fits-in-your-pocket.html

    Linux users will no longer be left out of the miniature PC party with the MintBox Mini, a pocket-sized Linux Mint rig, due out next quarter.

    While devices like the Raspberry Pi have already enabled Linux in diminutive packages, the MintBox Mini should be a clear step up in performance. According to the Linux Mint blog, it’ll pack an AMD A4 6400T processor with Radeon R3 graphics, 4GB of RAM, and 64GB of solid state storage. That should be more than sufficient for Web browsing, word processing, and video playback. As the name suggests, it’ll include Linux Mint out the box.

    Between the SSD and the passively cooled processor, the MintBox Mini runs utterly silent.

    As for size, the MintBox Mini will measure 0.95 inches thick, with a volume of 0.22 liters. That makes it three times smaller than Intel’s NUC desktops, and five times smaller than the original Mintbox from 2012. It should be small enough to stuff in your pocket.

    It’ll cost $295, with a portion of the proceeds going to Linux Mint, and will include a five-year warranty.

  5. Tomi Engdahl says:

    Dream job: Sysadmin/F1 pit crew member with Red Bull racing
    The job’s real, you get a personal trainer and unmissable one-second deadlines
    http://www.theregister.co.uk/2015/03/12/dream_job_sysadminf1_pit_crew_member/

  6. Tomi Engdahl says:

    Intel Lowers Q1 Revenue Forecast By $900M To $12.8B Amid PC Sales Slump
    http://techcrunch.com/2015/03/12/intel-lowers-q1-revenue-forecast-by-900m-to-12-8b-amid-pc-sales-slump/

    The shift away from desktop PCs to smartphones and other smaller computing devices is having a big impact on a major player in the PC market. Chipmaker Intel today lowered its revenue expectations by $900 million for its Q1 earnings that will come out on April 14. It now says Q1 sales will be $12.8 billion, “plus or minus $300 million,” versus earlier guidance of $13.7 billion, “plus or minus $500 million.” It says the change was made because of weak demand for desktop PCs and economic conditions in specific markets like Europe.

    “The change in revenue outlook is a result of weaker than expected demand for business desktop PCs and lower than expected inventory levels across the PC supply chain,” the company writes. “The company believes the changes to demand and inventory patterns are caused by lower than expected Windows XP refresh in small and medium business and increasingly challenging macroeconomic and currency conditions, particularly in Europe.”

  7. Tomi Engdahl says:

    Cisco, Microsoft target cloud vendors in expanded partnership
    http://www.zdnet.com/article/cisco-microsoft-target-cloud-vendors-in-expanded-partnership/

    Summary: The companies launched a new platform that combines the Microsoft Azure Pack with networking devices and servers from Cisco’s ACI.

    The combined technology platform, officially named the Cisco Cloud Architecture for the Microsoft Cloud, will enable other cloud providers to reduce costs and simplify operations, according to the two companies. It combines the Microsoft Azure Pack with networking devices and servers from Cisco’s Application Centric Infrastructure (Cisco ACI).

    The benefit of the combination, the companies said, is that it allows cloud vendors to deliver services like big data and disaster recovery at DevOps speed, while also shifting the focus away from systems integration.

  8. Tomi Engdahl says:

    On-prem storage peeps. Come here. It’s time for real talk. About Google
    Wobbly web giant uses Veritas as another onramp
    http://www.theregister.co.uk/2015/03/13/veritas_jumps_on_google_nearline_train/

    Veritas, Symantec’s renamed and soon to be spun off storage biz, announced it’s supporting Google’s nearline cloud storage.

    Its Veritas NetBackup 7.7, now in beta with general availability planned for the summer, can back up data to Google Cloud Storage Nearline. Here’s Veritas supporting cloud disks as an archival target alternative to tape.

    It says its product, through its OST layer, can “monitor and manage all backup information, regardless of location – disk, tape or cloud.” It will manage when and how data is moved from on-premises disk to Goog’s cloud, saying “the hybrid cloud model is the new norm inside of the enterprise.”

    Let’s put forward an extreme view: look people, don’t you realise? Google has absolutely no interest in the hybrid cloud. It has no skin in that game. You do. One way or another, every enterprise data storage systems and service provider has skin in the hybrid cloud/enterprise data centre and end-point storage game. Google does not. It is public cloud through and through.

    Amazon is, too. Azure is Microsoft’s skin in the public cloud game. Redmond knew what it had to do to stay at the top table.

    Public cloud service providers (CSPs) are modern-day Hoovers and Dysons, sucking up data and applications, and leaving nothing in their place but empty spaces and trashed on-premises-dependent business dreams.

    Just as soon as high-speed data networks to the cloud get built and become affordable, and the CSPs adopt proper SLA religion, all on-premises data storage system businesses will look like dodos. They’re going to be as dead as on-premises power generation, as buggy whips, as any other abandoned tech when a new technology or business model overwhelmed it.

    The public cloud is the biggest threat facing on-premises data storage.

    Nah, nah, too extreme. Never gonna happen. Business doesn’t trust the cloud that much. Get real. The cloud is insecure, unreliable, costs too much in the long run. Talking out the back of your silly ignorant head. What do you know, mere scribe?

  9. Tomi Engdahl says:

    Kyle Russell / TechCrunch:
    Algorithm marketplace Algorithmia exits private beta with over 800 algorithms available, charging developers per-use

    Algorithmia Launches With More Than 800 Algorithms On Its Marketplace
    http://techcrunch.com/2015/03/12/algorithmia-launches-with-more-than-800-algorithms-on-its-marketplace/

    Algorithmia, the startup that raised $2.4 million last August to connect academics building powerful algorithms and the app developers who could put them to use, just brought its marketplace out of private beta.

    More than 800 algorithms are available on the marketplace, providing the smarts needed to do various tasks in the fields of machine learning, audio and visual processing, and even computer vision.

    Algorithm developers can host their work on the site and charge a fee per-use to developers who integrate the algorithm into their own work. The platform encourages further additions to its library through a bounty system

  10. Tomi Engdahl says:

    Promoters of “alternative” IoT networks warn that IoT-friendly versions of LTE standards like CAT0 or LTE-MTC won’t become available until 2016 or later. The CAT0 standard and LTE-MTC are waiting for 3GPP’s Release 12 and Release 13. That’s why the LoRa Alliance and Sigfox are seeing openings to push their proprietary networks in the IoT market.

  11. Tomi Engdahl says:

    Business leaders say big data is having a ‘disruptive’ effect on their business
    http://www.cio.com/article/2895103/big-data/business-leaders-say-big-data-is-having-a-disruptive-effect-on-their-business.html

    A global study of business leaders shows that big data is having a “disruptive effect” on their organisations, including in the way they are having to organise their IT to exploit it.

    The Capgemini and EMC report surveyed over 1,000 C-suite and senior decision makers across 12 countries to understand the need and enterprise readiness for big data adoption. The report found that two-thirds (65 percent) of business leaders acknowledged they are at risk of becoming “uncompetitive” unless they embrace new data analytics solutions.

    The report also showed that 36 percent of organisations, due to the strategic importance of big data, have had to “circumvent IT teams” to carry out the necessary data analytics required to gain business insights. And over half (52 percent) reported that developing fast insights from data was “hampered” by “limitations” in the IT development process

    In the UK, nearly three quarters (72.5 percent) of UK business leaders said their organisations are either experiencing big data disruption or are anticipating it over the next three years. More than half (56 percent) of the UK organisations surveyed have already implemented big data technology or are in the process of doing so. And almost half (47 percent) say that there is, or will be, increasing competition in their industry from data-enabled startups.

    In addition, a majority (57 percent) of UK respondents see big data as capable of enhancing existing revenue streams by becoming a revenue driver in its own right, and an even higher proportion (61 percent) say that it can unlock entirely new revenue streams. And 41 percent say their companies are restructuring to exploit data opportunities, or have already done so, with a third introducing C-level roles relating to data.

  12. Tomi Engdahl says:

    Dean Takahashi / VentureBeat:
    Osmo Masterpiece is an iPad app aimed at kids that uses the front-facing camera, a proprietary mirror, and AI to train people how to draw

    Osmo Masterpiece could turn every kid into an iPad artist
    http://venturebeat.com/2015/03/12/osmo-masterpiece-could-turn-every-kid-into-an-ipad-artist/

    Osmo is turning out to be wonderfully creative at making apps for children that redefine the meaning of games and play. After a big debut last year, today the Silicon Valley startup is proving that again with Osmo Masterpiece, an app that enables kids and adults to become digital artists and regain confidence in their ability to draw. If you’ve ever been afraid to show somebody something you’ve drawn, you know what I mean.

    At a time when standardized testing is taking over classrooms, it’s nice to see an invention that gives children the skills to express themselves creatively.

    With Osmo Masterpiece, the child can snap a picture of anything or anyone. Then you attach Osmo’s reflective mirror to the iPad and activate an app that taps into Osmo’s artificial intelligence technology. The app uses computer vision to analyze the scene and produce a rough sketch of the object you have photographed. It lays out the important lines that you could use to create a drawing of that image.

    Then the kid can set a piece of paper in front of the iPad and trace the lines that Osmo suggests on the image on the iPad screen. The mirror enables the iPad’s camera to capture the movement of the child’s writing instrument and translate it into the image so you can see lines being drawn on the screen. Those lines are guided by the child’s own hand movements. It’s a lot like line-by-line tracing, but instead of tracing something underneath a sheet of paper, the child writes on the paper and looks at the lines on the iPad screen.

  13. Tomi Engdahl says:

    Google Code disables new project creation, will shut down on January 25, 2016
    http://venturebeat.com/2015/03/12/google-code-disables-new-project-creation-will-shut-down-on-january-25-2016/

    GitHub has officially won. Google has announced that Google Code project creation has been disabled today, with the ultimate plan to kill off the service next year.

    On August 24, 2015, the project hosting service will be set to read-only. This means you will still be able to checkout/view project source, issues, and wikis, but nobody will be able to make changes or new commits.

    On January 25, 2016, Google Code will be shut down. Google says you will be able to download tarballs of project source, issues, and wikis “throughout the rest of 2016.” After that, Google Code will be gone for good.

    Frankly, I’m not in the least surprised. Google engineering teams have been moving away from Google Code to GitHub for a long time now. New code, libraries, and documentation over the past year or so have often launched on GitHub exclusively, and older versions have been moved over without much explanation beyond “the developer community has increasingly used GitHub for hosting code.”

    Google Code launched in 2006 as an effort to give the open source community a hosting service to thrive on.

  14. Tomi Engdahl says:

    Davey Alba / Wired:
    IDC revises projections for 2015 PC shipments downward, now expects a 5% drop this year —
    http://www.wired.com/2015/03/no-really-pc-dying-not-coming-back/

  15. Tomi Engdahl says:

    No, Really, the PC Is Dying and It’s Not Coming Back
    http://www.wired.com/2015/03/no-really-pc-dying-not-coming-back/

    A little while back, we started to hear a few voices speaking up against the drumbeat of doom. No, PCs weren’t dead. They were maybe even coming back.

    Well, they’re not.

    Market research outfit IDC has revised its prediction of PC shipments in 2015 downward. It’s projecting a drop of nearly 5 percent this year, worse than its earlier forecast of a 3.3 percent decline. In all, IDC expects 293.1 million PC units to ship this year.

    To put that figure in perspective, Apple sold more than 74 million iPhones during the last quarter alone. At an annualized rate, that would put iPhone sales alone above IDC’s prediction for the entire PC market.

    And not just in terms of the number of devices moving. The PC industry is also losing money. According to IDC, the PC market shrank 0.8 percent last year to $201 billion. This year, it expects that number to balloon to 6.9 percent. By 2019, the firm expects the overall market to shrink to $175 billion, or several billion less than Apple’s 2014 revenue ($183 billion).

    None of this should come as a surprise in an age where mobile devices have become the dominant computing platform. But this latest prognosis is still worth noting for the way it signals the much larger tectonic shift currently underway in the industry

    Yesterday, Intel—the primary supplier of microprocessors for PCs—reduced its revenue outlook in the first quarter by almost a billion dollars. The company acknowledged it is seeing weakening demand from businesses for desktop computers and lower inventory levels across the industry supply chain.

    Case in point: earlier this year it emerged that Google’s capital expenditures had hit $11 billion, exceeding the $10 billion spent by Intel. Since Intel’s capital spending has traditionally served as a kind of tech industry high-water mark, Google’s ascendance was a big deal. Intel has typically spent its money on property, manufacturing plants, and chip-building equipment, while Google spent its cash on the data centers, computer servers, and networking equipment that underlies its online empire.

  16. Tomi Engdahl says:

    Intel 5th Gen vPro Goes 60GHz Wireless
    vPro Wireless Display (WiDi)/wireless docking
    http://www.eetimes.com/document.asp?doc_id=1325466&

    Intel’s new Pro Wireless Display (WiDi) and 60-GHz Wireless Docking technology enables tablets, laptops, detachable 2 in 1s and any other x86-based form factor to cut all cords. Keyboards, large-screen displays, printers, network connections, mice, USB accessories and everything else that once required a wire is now obsolete when using Intel’s 5th gen vPro core processors, according to Intel’s Tom Garrison, vice president and general manager of the Business Client Platform Division within the PC Client Group. HP and Fujitsu already have models available for sale, 12 worldwide OEMs have committed to 5th gen vPro and the top six OEMs will be showing demonstrations of their offerings today in New York City and London.

  17. Tomi Engdahl says:

    PC Drop & iPhone Rumors Color Intel Forecast
    http://www.eetimes.com/document.asp?doc_id=1326025&

    The news about the weakening PC market is getting old. And yet, when Intel cut nearly $1 billion from its first-quarter revenue forecast on Thursday (March 12), the well-told narrative of declining demand for PCs reared its ugly head again.

    The warnings from Intel also raise the issue of Intel’s ability to diversify its business — fast enough to make up for the declining PC revenue. A recent report that Intel’s LTE modems “could get into iPhones” in 2016 stirred some noise, but it’s still a rumor at this point.

    Chicago-based rating agency Fitch Ratings also reported Thursday, “Intel’s negative pre-announcement this morning suggests the end of support for Windows XP was a more significant factor than previously estimated in last year’s perceived stabilizing personal computer (PC) demand.”

    The rating agency concluded: “Consequently, extended PC refresh cycles may result in a resumption of negative PC sales growth.”

    Indeed, Intel’s newly issued forecast is a shock to some who were led to believe in recent months that the PC business was growing after two years of weakness.

  18. Tomi Engdahl says:

    Valve’s SteamVR: Solves Big Problems, Raises Bigger Questions
    http://games.slashdot.org/story/15/03/15/219245/valves-steamvr-solves-big-problems-raises-bigger-questions

    Valve’s astounding SteamVR solves big problems – and poses bigger questions
    Now that the VR dream is real, what do we do with it?
    http://www.eurogamer.net/articles/2015-03-15-valves-astounding-steamvr-solves-big-problems-and-poses-bigger-questions

  19. Tomi Engdahl says:

    Silicon Valley Investor Warns of Bubble at SXSW
    http://bits.blogs.nytimes.com/2015/03/15/silicon-valley-investor-says-the-end-is-near/?_r=2

    Rarely has there been a better time for start-ups to raise money from Silicon Valley venture capitalists.

    But this golden age may not last long. At least that is what Bill Gurley says. Mr. Gurley is a prominent investor who has spent a career betting on who the winners and losers will be in the future of technology.

    His thesis: In today’s environment, venture firms are willing to take big risks on young technology companies with no real track record in business. Many of these companies, he says, have high burn rates, which means they regularly spend enormous amounts of money in hopes of bolstering their overall growth.

    But some of these companies will not succeed, and he says the fallout from those failures will have repercussions across the technology industry.

    “There is no fear in Silicon Valley right now,” Mr. Gurley said Sunday at the South by Southwest music and technology conference. “A complete absence of fear.”

    By some accounts, there are more than 50 of these billion-plus companies in the Valley at the moment, with more added seemingly every other week.

    “I do think you’ll see some dead unicorns this year,” Mr. Gurley said.

  20. Tomi Engdahl says:

    EMC pulls on its DSSD boxing gloves and crooks a finger at Oracle
    Is DSSD set to be the new VMAX?
    http://www.theregister.co.uk/2015/03/16/dssd_is_the_new_vmax_emc_oracle_engineered_system/

    What we’re learning about EMC’s DSSD all-flash array project suggests it’s going to be the weapon of choice for the company in its war against Oracle’s Engineered Systems.

    DSSD could well be the new VMAX – a truly enterprise-class reliable all-flash array, but not one with XtremIO and VMAX-class data services.

    XtremIO is meant for enterprise, data-service rich, bullet-proof environments, while DSSD is for frighteningly fast, bullet-proof environments with poorer data services.

    EMC president Chad Sakac describes VMAX and XtremIO as exemplifying tightly-coupled scale-out architectures. These feature multi-controller grids, shared memory, transactional commits and consistent and linear performance. DSSD is in the same area, we understand.

    We understand that DSSD could store data as objects in a flat name space, needing no central filesystem-like index; the object name is its address. Such a system could use parallelism inside flash chips for faster overall access.

    A “3D RAID [scheme] could eliminate the encoding overhead inherent in advanced erasure codes while providing similar robustness, enabling way-beyond-RAID6 availability.”

    DSSD is to be launched later this year, and EMC World in Las Vegas, May 4-7, could be a DSSD orgy.

    DSSD could have one of the fastest-ever IT product ramps to a billion dollar run rate in history.

    Against this background HP’s Machine development starts looking to be a very good idea.

  21. Tomi Engdahl says:

    PernixData chap: We are to storage as Alfred Nobel was to dynamite
    Snazzy memory tech demolishes DRAM volatility restrictions, says chief techie
    http://www.theregister.co.uk/2015/03/10/dftm_dynamite_for_storage_access_acceleration/

    PernixData chief technologist Frank Denneman thinks distributed fault-tolerant memory technology (DFTM) is ushering in an era of nanosecond-scale storage access latencies that could fundamentally change applications and the way they access data.

    Application process run times could be cut to a tenth of their present levels or beyond, and servers could handle many more virtual machines.

    Frank Denneman: In the enterprise, RAM’s inability to retain its contents in the event of power loss has precluded its use for primary data storage, despite its high-performance characteristics.

    But if it could be harnessed, if the data loss issue could go away, if its volatility could be subjugated in the same way that Alfred Nobel made nitroglycerine’s volatility controllable with his invention of dynamite, then applications could avoid paying the storage access wait tax and run so much faster.

    Look, massive memories are bringing storage changes upon us, right now.

    The current generation of Intel Xeon processors are able to support up to 1.5TB of memory each. In true “virtuous cycle” fashion, VMware recently announced support for up to 12TB of RAM per [8-core] host in its flagship product, vSphere 6, to take full advantage.

    Frank Denneman: Yes, agreed. Some vendors, like SAP, embraced the large-memory trend early on and did the heavy lifting for their user base. Did they solve the problem of the volatile nature of memory? Unfortunately not. For example, although SAP HANA is an in-memory database platform, logs have to be written outside the volatile memory structure to provide ACID (Atomicity, Consistency, Isolation, Durability) guarantees for the database transactions to be processed reliably.

    In fact, SAP recommends using local storage resources, such as flash, to provide sufficient performance and data protection for these operations. Virtualising such a platform becomes a challenge

    El Reg: But application data still has to be written to persistent storage. How does that work with DFTM?

    Frank Denneman: This is true, but the key is that this operation is completely transparent to the application. Once the data is written to the local memory and the remote memory of another host in the cluster, the write operation is completed.

    This architecture removes as many moving parts as possible. It writes to the local memory and simultaneously writes to remote memory across the Ethernet network to complete the synchronous replication.

  22. Tomi Engdahl says:

    Hello?! Converged data centre peeps? Dell likes US too, says VMware
    New list of VSAN Ready Nodes emerges with some oldies, some n00bs and HP
    http://www.theregister.co.uk/2014/06/25/hello_converged_data_centre_peeps_dell_likes_us_too_says_vmware/

  23. Tomi Engdahl says:

    Merkel urges closer tech ties with China
    http://www.zdnet.com/article/merkel-urges-closer-tech-ties-with-china/

    Summary: Angela Merkel has used the CeBIT event as a platform to advocate closer tech cooperation between China and Germany.

    German Chancellor Angela Merkel has urged closer high-tech cooperation with China as she opened major IT business fair CeBIT, for which China is the official partner country.

    “German business values China, not just as our most important trade partner outside of Europe, but also as a partner in developing sophisticated technologies,” Merkel said.

    “Especially in the digital economy, German and Chinese companies have core strengths … and that’s why cooperation is a natural choice.”

    Merkel was speaking at the opening of the CeBIT fair in Hanover, Germany, where more than 600 Chinese companies will exhibit their tech marvels this week, showcasing the country’s rise as an IT power.

    China’s information and communications technology has bucked the country’s wider slowdown in economic growth, and is booming in what is now the world’s biggest smartphone market with the highest number of internet users.

  24. Tomi Engdahl says:

    BlackBerry partners with Samsung, IBM on ultra-secure tablet – but it will cost close to $2,400
    http://www.fiercewireless.com/story/blackberry-partners-samsung-ibm-ultra-secure-tablet-it-will-cost-close-2400/2015-03-16?utm_medium=rss&utm_source=rss&utm_campaign=rss

    BlackBerry (NASDAQ:BBRY) is teaming up with Samsung Electronics and IBM to offer a highly secure tablet for government and enterprise workers, but it will cost around $2,380.

    BlackBerry unit Secusmart is partnering with IBM to release the SecuTablet, a high-security tablet based on the Samsung Galaxy Tab S 10.5. The solution is undergoing certification at the German Federal Office for Information Security for the German VS-NfD (“classified–for official use only”) security rating.

    The tablet uses Secusmart’s encryption technology, which is being used by the German and Canadian governments, among others, to guard against eavesdropping. BlackBerry said the tablet can be seamlessly integrated into existing SecuSUITE security systems.

    Further, IBM has provided the secure "app wrapping" technology for the tablet to separate business applications from personal ones. IBM will also help implement the high-security solutions from Secusmart for government clients.

    Reply
  25. Tomi Engdahl says:

    Why Google wants to replace Gmail
    http://www.computerworld.com/article/2838775/why-google-wants-to-replace-gmail.html

    Gmail represents a dying class of products that, like Google Reader, puts control in the hands of users, not signal-harvesting algorithms.

    I’m predicting that Google will end Gmail within the next five years. The company hasn’t announced such a move — nor would it.

    But whether we like it or not, and whether even Google knows it or not, Gmail is doomed.
    What is email, actually?

    Email was created to serve as a “dumb pipe.” In mobile network parlance, a “dumb pipe” is when a carrier exists to simply transfer bits to and from the user, without the ability to add services and applications or serve as a “smart” gatekeeper between what the user sees and doesn’t see.

    Carriers resist becoming “dumb pipes” because there’s no money in it. A pipe is a faceless commodity, valued only by reliability and speed. In such a market, margins sink to zero or below zero, and it becomes a horrible business to be in.

    “Dumb pipes” are exactly what users want. They want the carriers to provide fast, reliable, cheap mobile data connectivity. Then, they want to get their apps, services and social products from, you know, the Internet.

    Email is the “dumb pipe” version of communication technology, which is why it remains popular. The idea behind email is that it’s an unmediated communications medium. You send a message to someone. They get the message.

    When people send you messages, they stack up in your in-box in reverse-chronological order, with the most recent ones on top.

    Compare this with, say, Facebook, where you post a status update to your friends, and some tiny minority of them get it.
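
    The difference is easy to state in code. A toy illustration (mine, not anything Google ships): an unmediated inbox is nothing more than a timestamp sort, while a mediated feed interposes a ranking function between sender and reader, and may drop messages entirely.

        from operator import itemgetter

        messages = [
            {"sender": "alice", "ts": 1426600000, "engagement": 0.2},
            {"sender": "bob",   "ts": 1426700000, "engagement": 0.9},
        ]

        # Email-style "dumb pipe": newest first, nothing filtered or reordered.
        inbox = sorted(messages, key=itemgetter("ts"), reverse=True)

        # Facebook-style mediation: an algorithm decides what you see, in what
        # order, and whether you see it at all.
        feed = [m for m in sorted(messages, key=itemgetter("engagement"),
                                  reverse=True) if m["engagement"] > 0.5]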

    Why email is a problem for Google

    You’ll notice that Google has made repeated attempts to replace “dumb pipe” Gmail with something smarter. They tried Google Wave. That didn’t work out.

    They hoped people would use Google+ as a replacement for email. That didn’t work, either.

    They added prioritization. Then they added tabs, separating important messages from less important ones via separate containers labeled by default “Primary,” “Promotions,” “Social Messages,” “Updates” and “Forums.” That was vaguely popular with some users and ignored by others.

    This week, Google introduced an invitation-only service called Inbox. Another attempt by the company to mediate your dumb email pipe, Inbox is an alternative interface to your Gmail account, rather than something that requires starting over with a new account.

    Instead of tabs, Inbox groups, labels and color-codes messages according to category.

    But the bottom line is that dumb-pipe email is unmediated, and therefore it’s a business that Google wants to get out of as soon as it can.

    Say goodbye to the unmediated world of RSS, email and manual Web surfing. It was nice while it lasted. But there’s just no money in it.

    Reply
  26. Tomi Engdahl says:

    You Don’t Need to Start as a Teen to be an Ethical Hacker (Video)
    http://it.slashdot.org/story/15/03/16/1911252/you-dont-need-to-start-as-a-teen-to-be-an-ethical-hacker-video

    Justin is 40, an age where a lot of people in the IT game worry about being over the hill and unemployable. But Justin’s little video talk should give you hope — whether you’re a mature college student, have a stalled IT career or are thinking about a career change but want to keep working with computers and IT in general. It seems that there are decent IT-related jobs out there even if you’re not a youngster; and even if you didn’t start working with computers until you were in your 20s or 30s.

    Reply
  27. Tomi Engdahl says:

    HSA Foundation Launches ‘HSA 1.0 Final’ – Architecture, Programmers Reference and Runtime Specifications
    by Ian Cutress on March 16, 2015 7:30 PM EST
    http://www.anandtech.com/show/9066/hsa-foundation-launches-hsa-10-final-architecture-programmers-reference-and-runtime-specifications

    Heterogeneous compute has been on the tip of the tongue for many involved in integrating compute platforms, and despite talk and demos regarding hardware conforming to HSA provisional specifications, today the HSA Foundation is officially launching the ratified 1.0 Final version of the standardized platform design. This brings together a collection of elements:

    - the HSA Specification 1.0, which defines the operation of the hardware,
    - the HSA Programmers' Reference Manual, for the software ecosystem targeting tools and compiler developers,
    - the HSA Runtime Specification, for how software should interact with HSA-capable hardware.

    The specifications are designed to be hardware agnostic, as you would imagine, allowing ARM, x86, MIPS and other ISAs to take advantage of the HSA standard as long as they conform to the specifications. The HSA Foundation is currently working on conformance tests, with the first set of testing tools to be ready within a few months. The goal is to have the HSA specification enabled on a variety of common programming languages, such as C/C++, OpenMP and Python, as well as C/Fortran wrappers for the HPC space. Companies such as MultiCoreWare are currently helping develop some of these compilers for AMD, for example.

    Reply
  28. Tomi Engdahl says:

    X99 goes TUF: Sabertooth X99 at CeBIT 2015 with NVMe Support
    by Ian Cutress on March 16, 2015 7:15 PM EST
    http://www.anandtech.com/show/9086/x99-goes-tuf-sabertooth-x99-at-cebit-2015-with-nvme-support

    Of the four major motherboard manufacturers, three separate their main lines into channel (regular), overclocking and gaming, with one also having a low-power range. ASUS does it a little differently, with the Republic of Gamers line as a combined gaming/overclocking product stack and the TUF (The Ultimate Force) range, which carries a longer warranty and is engineered for longevity. The newest member of this line is to be the X99 Sabertooth, which ASUS is showcasing at CeBIT.

    Reply
  29. Tomi Engdahl says:

    Honey, I shrunk the Windows footprint
    Microsoft explains how it will fit Windows 10 into cheaposlabs, maybe without bloatware
    http://www.theregister.co.uk/2015/03/17/honey_i_shrunk_the_windows_footprint/

    One of the challenges Microsoft has given itself with its goal of making Windows 10 run on almost anything is that lots of devices have rather less storage than PCs, which means Redmond can’t assume users will have hard disk space to burn.

    If Windows takes up most of a cheaposlab’s 32GB solid state disk, that’s not going to make for happy punters. Even in developing nations, where people are just as keen on photos of the kids as people anywhere else but lack the bandwidth to dump them into OneDrive.

    Redmond's therefore finding ways to squeeze Windows 10's footprint and has detailed them in a new post.

    The big space-saving change Microsoft’s made is to remove the need for a recovery image to be stored on PCs and fondleslabs. Microsoft says that image currently consumes between 4GB and 12GB, depending on how OEMs implement recovery features.

    How Windows 10 achieves its compact footprint
    http://blogs.windows.com/bloggingwindows/2015/03/16/how-windows-10-achieves-its-compact-footprint/

    Compactness via system compression and recovery enhancements

    Windows 10 employs two separate and independent approaches for achieving a compact footprint. First, Windows 10 leverages an efficient compression algorithm to compress system files. Second, recovery enhancements have removed the requirement for a separate recovery image.

    With current builds, Windows can efficiently compress system files. That gives back approximately 1.5GB of storage for 32-bit and 2.6GB of storage for 64-bit Windows. Phones will also be able to use this same efficient compression algorithm and likewise have capacity savings with Windows 10.

    With these new and enhanced functionalities, devices running Windows 10 will have more free space for photos, videos, and music, as the figure below illustrates.
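
    Putting the post's own figures together, the combined saving is substantial on a 32GB device. A quick back-of-envelope check (numbers as quoted above):

        # Compression saving plus the removed recovery image, per the figures above.
        compression_saving = {"32-bit": 1.5, "64-bit": 2.6}   # GB
        recovery_image = (4.0, 12.0)                          # GB, varies by OEM

        for arch, comp in compression_saving.items():
            lo, hi = comp + recovery_image[0], comp + recovery_image[1]
            print(f"{arch}: {lo:.1f} to {hi:.1f} GB freed")
        # 32-bit: 5.5 to 13.5 GB freed
        # 64-bit: 6.6 to 14.6 GB freed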

    Reply
  30. Tomi Engdahl says:

    Big Data shocker: Over 6 million Americans have reached the age of 112
    Just 13 are claiming benefits, and 67,000 of them are WORKING
    http://www.theregister.co.uk/2015/03/17/big_data_reveals_that_65m_americans_have_reached_age_112/

    In an illustration of what can happen when you use Big Data uncritically, it has emerged that no less than 6.5 million living Americans have reached the ripe old age of 112. Even more amazingly, it appears that just 13 of the super-silver legions are claiming benefits – and tens of thousands of them appear to be holding down jobs at least part-time.

    Were they being taken seriously, the Social Security Administration’s records would be shattering assumptions regarding the numbers of supercentenarians alive in the world today.

    Only 13 of the 6.5 million are actually claiming Social Security benefits, it seems, but the other records have not been formally deleted, and thus create an opportunity for fraudsters to give false details when providing their financial information.

    The Associated Press quotes Senator Ron Johnson as saying “This is a real problem.”
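
    The fix is mundane: validate before you aggregate. A minimal sketch of the kind of plausibility check that would have caught this (illustrative only, not the SSA's actual systems):

        from datetime import date

        PLAUSIBILITY_CUTOFF = 112   # the age the SSA data assigns to 6.5m people

        def flag_implausible(records, today=date(2015, 3, 17)):
            """Yield live records whose birth date implies a wildly improbable age."""
            for rec in records:
                age = (today - rec["birth_date"]).days / 365.25
                if age >= PLAUSIBILITY_CUTOFF and "date_of_death" not in rec:
                    yield rec       # alive at 112+? almost certainly a stale record

        suspects = list(flag_implausible([
            {"id": 1, "birth_date": date(1901, 5, 2)},    # flagged for review
            {"id": 2, "birth_date": date(1950, 1, 1)},    # passes
        ]))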

    Reply
  31. Tomi Engdahl says:

    Nutanix to release ‘community version’ of its secret software sauce
    Stop describing us with the ‘C-word’ says CEO Dheeraj Pandey
    http://www.theregister.co.uk/2015/02/19/nutanix_to_release_community_version_of_its_secret_software_sauce/

    Nutanix is months away from releasing a free, "community" version of the secret software sauce that turns its collections of storage and servers into hyperconverged, spin-up-VMs-almost-before-you-know-it beasts.

    CEO Dheeraj Pandey yesterday told The Register that development work on the version is under way and that it will be released in coming months [we've since learned it's in Alpha, with some civilians offered access – ed].

    Pandey added that no date has been chosen for its release. Nor has it been decided what hardware will be required, or whether any limits will be imposed on users. He said he's inclined to let the release scale to considerable heights, since users willing to make do with peer support only may never become paying customers anyway.

    Nutanix’s secret-sauce software takes clusters of server nodes filled with processors, memory and drives, and turns them into pools of storage. Virtual machines, running in hypervisors on the servers, then access the pool as a whole.

    The community edition is part of Nutanix’s ongoing effort to de-emphasise its hardware.

    Reply
  32. Tomi Engdahl says:

    Jordan Pearson / Motherboard:
    ‘Sirius’ Is the Google-Backed Open Source Siri
    http://motherboard.vice.com/read/sirius-is-the-google-backed-open-source-siri

    Sirius, a Google-funded open source program similar to Apple’s Siri or Google’s Google Now voice recognition application, could finally democratize the virtual assistant.

    Right now, virtual assistants are a game for the big kids of the tech world—Apple, Microsoft, Amazon, and Google itself all have their own versions. Sirius, developed by researchers at the University of Michigan's Clarity Lab, aims to do what those programs can, with an open source twist.

    Other backers include the Defense Advanced Research Projects Agency, the US military’s research wing, and the National Science Foundation.

    The idea is that anybody can contribute to the program on GitHub, a site for coders to collaborate. It’s also being released under a BSD license, documents on the project’s GitHub indicate, meaning that it will be completely free for anyone to use or distribute. Researchers will be able to use it to explore the possibilities of virtual assistants, according to a university statement, and eventually, anybody can put it on their own homebrew device.

    Right now, it's only been tested on Ubuntu desktops, but it could one day make it onto phones and other devices. Jason Mars, the researcher who headed up the project, describes Sirius as a Linux-like version of Siri.

    Sirius already has capabilities lacking from its corporate counterparts. For example, you can take a picture, feed it to Sirius, and ask a question about it. Siri can’t do that. But, unlike Siri, Sirius isn’t exactly elegant; it’s a patchwork of other open source projects that, when stitched together, give Sirius its capabilities.

    Reply
  33. Tomi Engdahl says:

    SoC Spec Hits Version 1.0
    AMD-led HSA spec awaits adopters
    http://www.eetimes.com/document.asp?doc_id=1326040&

    The Heterogeneous System Architecture Foundation has finished its 1.0 spec, turning attention to which SoCs will support it. So far, of the group's seven board members, only Advanced Micro Devices has announced chips using HSA's specs.

    The HSA specs are geared to let graphics cores share coherent memory as equal citizens with CPU cores, enabling performance gains across a wide range of applications. The approach also lets developers program SoCs in familiar high-level languages such as C++ and Python. AMD, which led the formation of the group, said its latest x86 SoC, Carrizo, will get certified as compliant with the spec when tests are ready later this year.

    Other leading HSA board members include Imagination Technologies, LG, MediaTek, Qualcomm and Samsung. They have not yet commented on product plans, but their work on the spec suggests they have them, said Phil Rogers, president of the HSA Foundation and a corporate fellow at AMD.

    Reply
  34. Tomi Engdahl says:

    Torvalds: Linux core coders are hired quickly

    88 per cent of the Linux kernel code produced last year was written by programmers on a company payroll. Linus Torvalds sees a very clear reason why the number of volunteers contributing code to the heart of the kernel keeps shrinking.

    Torvalds says this does not mean that volunteers are abandoning open source projects. Rather, kernel coders get hired by companies pretty quickly, Torvalds said in an interview with Network World.

    "Maybe we started out as volunteers, but these days most of us are happily paid to code Linux," Torvalds continued.

    Since 2005, a total of 11,695 coders from more than 1,200 companies have taken part in Linux kernel development.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2557:torvalds-ytimeen-koodaajat-palkataan-nopeasti&catid=13&Itemid=101

    Reply
  35. Tomi Engdahl says:

    Analysis: People Who Use Firefox Or Chrome Make Better Employees
    http://news.slashdot.org/story/15/03/17/0215235/analysis-people-who-use-firefox-or-chrome-make-better-employees

    In the world of Big Data, everything means something. Joe Pinsker reports that Cornerstone OnDemand, a company that sells software to help employers recruit and retain workers, analyzed data on about 50,000 people who took its 45-minute online job assessment. It found that people who took the test in a non-default browser, such as Firefox or Chrome, ended up staying at their jobs about 15 percent longer than those who stuck with Safari or Internet Explorer.

    “I think that the fact that you took the time to install Firefox on your computer shows us something about you. It shows that you’re someone who is an informed consumer,”

    Comment:
    If an industry has a 45% turnover rate, as is cited for call centers, the problem is not the “talent and dedication” of the employees. The problem is that the job is structured in such a way that it is mind numbing, repetitive, and unsatisfying to the workers. And BTW, if you really want workers who can perform under such conditions, you are NOT looking for someone who wants control over their circumstances as indicated by the selection of a non-default browser.

    However, there is a skill in finding empowerment even in mind-numbing jobs. Installing Chrome or Firefox is one example: it is usually "officially" against the rules, but you do it anyway because you know your browsing experience will be a little bit safer. Knowing when to bend or break the rules, and when to follow them, is an important skill.
    I see too many people who just suffer through their job, and their performance suffers, because they are so cautious about following the rules that they cannot break out of the humdrum activity. I also see people get fired for going too gung-ho and breaking rules just because they didn't like them.

    Reply
  36. Tomi Engdahl says:

    Convenience trumps ‘open’ in clouds and data centers
    Sorry OpenStack and Open Compute, we’re not all Facebook
    http://www.theregister.co.uk/2015/03/17/openstack_open_compute_vs_proprietary/

    Call it OpenStack. Call it Open Compute. Call it OpenAnything-you-want, but the reality is that the dominant cloud today is Amazon Web Services, with Microsoft Azure an increasingly potent runner-up.

    Both decidedly closed.

    Not that cloud-hungry companies care. While OpenStack parades a banner of "no lock-in!" and Open Compute lets enterprises roll their own data centres, what enterprises really want is convenience, and public clouds offer that in spades. That's driving Amazon Web Services to a reported $50bn valuation, and calling into question private cloud efforts.

    For those enterprises looking to go cloud – but not too cloudy – OpenStack feels like a safe bet. It has a vibrant and growing community, lots of media hype, and brand names like HP and Red Hat backing it with considerable engineering resources.

    No wonder it’s regularly voted the top open-source cloud.

    The problem, however, is that “open” isn’t necessarily what people want from a cloud.

    While there are indications that OpenStack is catching on (see this Red Hat-sponsored report from IDG), there are far clearer signs that OpenStack remains a mass of conflicting community-sponsored sub-projects that make the community darling far too complex.

    As one would-be OpenStack user, David Laube, head of infrastructure at Packet, describes:

    Over the course of a month, what became obvious was that a huge amount of the documentation I was consuming was either outdated or fully inaccurate.

    This forced me to sift through an ever greater library of documents, wiki articles, irc logs and commit messages to find the ‘source of truth’.

    After the basics, I needed significant python debug time just to prove various conflicting assertions of feature capability, for example ‘should X work?’. It was slow going.

    While Laube remains committed to OpenStack, he still laments that “the amount of resources it was taking to understand and keep pace with each project was daunting”.

    Open Compute may not compute

    Nor is life much better over in Open Compute Land. While the Facebook project (which aims to open source Facebook's datacentre designs) promises to create a world filled with hyper-efficient data centres, the reality is that most enterprises simply aren't in a position to follow Facebook's lead.

    Back in 2012, Bechtel IT exec Christian Reilly lambasted Open Compute, declaring that: “Look how many enterprises have jumped on Open Compute. Oh, yes, none. That would be correct.”

    While that’s not true – companies such as Bank of America, Goldman Sachs, and Fidelity have climbed aboard the Open Compute bandwagon – it’s still the case that few companies are in a position to capitalize on Facebook’s open designs.

    This may change, of course. Companies such as HP are piling into the Open Compute community to make it easier, with HP building a new server line based on Open Compute designs, as but one example.

    The new and the old

    One of the biggest problems with the private cloud is the nature of the workloads enterprises are tempted to run within it.

    As Bittman writes in separate research, while the overall number of VMs has tripled in the past few years, the number of active VMs running in public clouds has expanded by a factor of 20.

    This means that: “Public cloud IaaS now accounts for about 20 per cent of all VMs – and there are now roughly six times more active VMs in the public cloud than in on-premises private clouds.”
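
    Those two figures pin down the rest of the picture: if the public cloud holds 20 per cent of all VMs and six times as many as private clouds, private clouds hold barely 3 per cent, leaving roughly three-quarters in plain on-premises virtualisation. A three-line sanity check:

        public_share = 0.20                        # Bittman: public share of all VMs
        private_share = public_share / 6           # ~0.033, six times fewer
        other = 1 - public_share - private_share   # ~0.77: traditional virtualisation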

    While a bit dated (2012), Forrester’s findings remain just as true today:

    Asking IT to set up a hyper-efficient Facebook-like data centre isn’t the “fastest way to get [things] done”. Ditto cobbling together a homegrown OpenStack solution. In fact, private cloud is rarely going to be the right way to move fast.

    Sure, there are other reasons, but the cloud that wins will be the cloud that is most convenient. Unless something drastic changes, that means public cloud will emerge triumphant.

    Reply
  37. Tomi Engdahl says:

    Nvidia tears wraps off ‘its fastest GPU’, the GeForce Titan X – again
    Teased at games dev conference and confirmed today
    http://www.theregister.co.uk/2015/03/17/nividia_keynote_musk/

    GTC 2015 Nvidia CEO Jen-Hsun Huang confirmed the arrival of the GeForce Titan X just a few minutes ago, dubbing it Nvidia's fastest GPU to date.

    The Titan X, codenamed GM200, is a 28nm Maxwell GPU with 12GB of RAM, 1GHz clock speed, 336GB/s memory bandwidth, 8 billion transistors, 3,072 CUDA cores, 7TFLOPS (single precision) and 0.2TFLOPs (double precision) performance, and a PCIe 3.0 x16 interface.

    Nvidia is pitching the Titan X not just for games, but also for deep-learning processing, and has priced it at $999.

    Reply
  38. Tomi Engdahl says:

    Counterfeit SD Card Problem is Widespread
    http://www.eetimes.com/document.asp?doc_id=1326059&

    Individual consumers and corporate bulk buyers alike should be wary of great prices for secure digital (SD) memory cards: they will find out the hard way the cards are bogus.

    The Counterfeit Report recently published its findings about the extent of counterfeit SD cards available for purchase, particularly online from dishonest sellers using eBay, Amazon, and Alibaba offering high capacity cards at deep discounts. Publisher Craig Crosby said the cards and packaging, using common serial numbers, are nearly identical to the authentic product of all major SD card brands.

    Tests by the Counterfeit Report found that the cards will work at first, but buyers who think they are purchasing cards with capacities of 32GB and up are instead getting cards with 7GB of capacity. Counterfeiters simply overwrite the real memory capacity with a false one to match whatever capacity and model they print on the counterfeit packaging and card, Crosby explained. Users can't determine the actual memory capacity of a counterfeit memory card by simply plugging it into their computer, phone, or camera. When the user hits the real limit, the phony card starts overwriting files, which leads to lost data.

    SanDisk: SDHC Micro SD Memory Cards
    http://www.thecounterfeitreport.com/product/186/SDHC-Micro-SD-Memory-Cards.html

    According to reports from SanDisk, one third of all memory cards on the market are counterfeit.

    Many buyers of counterfeit products find that branded SanDisk cards are actually re-branded inferior quality cards or cards of smaller capacity.

    All genuine SanDisk memory cards should have a serial number and a manufacturing country's identity. You cannot determine the authenticity of a SanDisk card from an internet stock photo.

    You cannot determine the actual memory capacity of a counterfeit memory card by simply viewing the capacity displayed by your computer, phone or camera. Counterfeiters fraudulently overwrite the card's internal memory with a false capacity. It must be checked with a test program.

    To check whether your micro SD card has the capacity and speed stated, "H2testw" is a simple tool that is distributed for free, does not require installation and offers a very simple, easy-to-use interface. The program can be used by anyone who wants to know a device's real capacity, the number of errors detected on it, and how the product they actually got compares to what was advertised.
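
    The write-then-verify idea behind H2testw is simple enough to sketch (the general approach, not H2testw's actual source): fill the card with blocks whose contents depend on their position, then read everything back. A card that lies about its capacity starts overwriting earlier data, so verification fails well below the labelled size.

        import hashlib
        import os

        BLOCK = 1024 * 1024  # 1 MiB test blocks

        def block_pattern(index):
            # Deterministic, position-dependent data: a wrapped-around write
            # cannot masquerade as the block we expect at this offset.
            seed = hashlib.sha256(index.to_bytes(8, "big")).digest()
            return seed * (BLOCK // len(seed))

        def verify_capacity(path, blocks):
            """Write then re-read test blocks; return how many verified OK."""
            with open(path, "wb") as f:          # path: a test file on the card
                for i in range(blocks):
                    f.write(block_pattern(i))
                f.flush()
                os.fsync(f.fileno())
            ok = 0
            with open(path, "rb") as f:
                for i in range(blocks):
                    if f.read(BLOCK) == block_pattern(i):
                        ok += 1
            return ok   # counterfeits typically fail far below the labelled size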

    Reply
  39. Tomi Engdahl says:

    H2testw 1.4
    Get detailed info on how well various storage devices perform at reading and writing data with the help of this lightweight application
    http://www.softpedia.com/get/System/System-Miscellaneous/H2testw.shtml

    Reply
  40. Tomi Engdahl says:

    Marvell Touts MoChi, FLC in Shanghai
    http://www.eetimes.com/document.asp?doc_id=1326063&

    Weili Dai, president and co-founder of Marvell Technology Group (Santa Clara, Calif.), was in Shanghai on Wednesday, March 18 to pitch Marvell's recently disclosed, less memory-intensive computing system architecture – scalable from wearables to data centers – in a keynote speech at the Global CEO Summit.

    Dai painted in broad strokes Marvell’s Final-Level Cache (FLC) memories and a new interconnect technology called MoChi (modular chip). She described them as basic building blocks for the company’s scalable SoCs.

    Marvell's CEO Sehat Sutardja first presented the idea of FLC and MoChi at the International Solid-State Circuits Conference (ISSCC) last month. The approach, he said, can substantially reduce the amount of DRAM main memory needed in a system.

    Now, taking that revolutionary concept developed by her husband Sutardja, Dai is running with it in China.

    She said in her keynote that FLC is “redefining the main memory hierarchy,” while MoChi is designed to “build chips in Lego-like format.”

    She explained that the FLC-MoChi approach will significantly reduce the “cost, power and size” of electronics systems – whether PC, server, smartphone, or wearable.

    Marvell plans to launch prototype chips — based on the MoChi interconnect and FLC memories — at the end of this year.
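
    Reading between the lines, FLC treats a small, fast memory as a cache in front of a much larger, cheaper backing store. A toy LRU model (my conceptual reading, not Marvell's design) captures the idea: most accesses are served from a small DRAM-like tier while the bulk of "main memory" lives in something flash-like.

        from collections import OrderedDict

        class FinalLevelCacheModel:
            """Toy model: small fast tier over a large slow tier."""

            def __init__(self, fast_capacity):
                self.fast = OrderedDict()   # small, expensive, quick (DRAM-like)
                self.slow = {}              # large, cheap, slow (flash-like)
                self.capacity = fast_capacity

            def read(self, addr):
                if addr in self.fast:                # hit: served at DRAM speed
                    self.fast.move_to_end(addr)
                    return self.fast[addr]
                value = self.slow.get(addr)          # miss: fetch from slow tier
                self._install(addr, value)
                return value

            def write(self, addr, value):
                self._install(addr, value)

            def _install(self, addr, value):
                self.fast[addr] = value
                self.fast.move_to_end(addr)
                if len(self.fast) > self.capacity:   # evict least-recently-used
                    old, val = self.fast.popitem(last=False)
                    self.slow[old] = val             # demote to the big, cheap tier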

    Reply
  41. Tomi Engdahl says:

    Bing Distill is Microsoft’s version of Yahoo Answers, website is live and you can request an invite
    http://www.winbeta.org/news/bing-distill-microsofts-version-yahoo-answers-website-live-and-you-can-request-invite

    At the beginning of this year, we reported that Microsoft was working on curating human answers in a new service called Bing Distill. The word distill, for those curious, means “to extract the essential meaning or most important aspects of.” The service is simple — use your know-how to answer questions people are asking on Bing. Help the community create the best answers when you give feedback. And you can edit your answers and those created by other community members.

    For longtime Microsoft followers, you may recall that Microsoft has tried something similar before, in what became “Windows Live QnA.”

    “Millions ask, you answer. Join Bing Distill and be part of the community that answers the questions everyone is searching for,”

    Reply
  42. Tomi Engdahl says:

    Suits vs ponytailed hipsters: What’s next for enterprise IT
    Enrico tells you what you won’t see in 2015
    http://www.theregister.co.uk/2015/01/06/enterprise_it_predictions/

    After all the predictions I read about 2015 (and many of them are pretty ridiculous) I can’t help but make my own, which I will approach in a totally different manner, using common sense first and trying to make an “anti-prediction” for enterprise IT in 2015.

    100 per cent flash

    Flash dominance? It won't happen any time soon. Many (most?) tier-one workloads will be moved to flash, of course, but data is piling up so quickly that it's highly unlikely you will be seeing a 100 per cent flash data centre any time soon.

    It will take a few years to get even 10-20 per cent of data stored on flash, and the rest will remain on huge hard disks; cheap 10+TB hard disks will soon be broadly available, for example.

    Object storage

    You won’t see a lot of object storage either – a hard thing for a fanboi like myself to say! It’ll happen eventually, but most of the vendors will stay in their niche until they can work out a better and more solution-based strategy.

    OpenStack – ties vs ponytails

    I'm sorry to say it, but I just don't see it happening, and there are a number of reasons why, all of which lead to the same conclusion: complexity and immaturity. Even though the number of ties at OpenStack conferences is growing very fast compared to the ponytails, we haven't reached the right balance yet.

    The private cloud, seen as a new form of computing and not as an extension of your traditional virtual infrastructure (meaning the VMware and Microsoft stacks), is only for a few, very big, end users. Its complexity is just too high for most.

    Last but not least, hybrid cloud is also getting some attention, but it's hard to do with OpenStack. Making workloads move back and forth, and controlling them across different OpenStack clouds (which also means different releases with different APIs and features), isn't easy, is it?

    The end of cloud wars

    Microsoft is quickly becoming a cloud company (with a lot of openness – unusual in Microsoft’s history), Google is playing more seriously and Amazon is trying to keep its unquestionable leading position. It will be very interesting to see what happens, especially now that new interesting players (like Digital Ocean, for example) are coming up.

    Containers (Docker or whatever)

    If 2014 was the year of hype, 2015 will be the year of maturity and 2016 will be all about adoption. Long story short, this won’t be the year for containers but most of the needed backend and enabling tools will be maturing this year.

    I wouldn't be surprised to see a more successful story with containers (and some specific cloud management tooling for them) than with OpenStack in general… but it won't happen in 2015, for sure.

    I know, it's probably easier to predict what will happen in the not-too-near future (and nobody will remember this post in two years).

    Reply
  43. Tomi Engdahl says:

    Java will become a much more scalable and secure platform when version 9 sees the light of day next year. Java's chief architect Mark Reinhold confirmed Oracle's plans last week at the EclipseCon event.

    According to Reinhold, the modularisation of Java 9 is being carried out along four development tracks that go by the name JEP (JDK Enhancement Proposal). JEP 200 defines Java's modular structure, JEP 201 covers modularity of the source code, and JEP 220 specifies how modularity is realised at run time. The fourth reform package is JSR 376, which defines Java's module system in detail.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2571:javasta-tulee-ensi-vuonna-modulaarinen&catid=13&Itemid=101

    Reply
  44. Tomi Engdahl says:

    CIO, show your mettle as ruler of the digital business

    The digitalisation of business is causing a bustle in management teams. Companies are briskly appointing new C-level digital and data executives. Many of the problems could be solved if the good old IT boss showed his or her claws as a technology expert.

    The general digitalisation of businesses, employees, customers and other stakeholders has also increased the power struggles between management board members. The mutual wrestling of CIOs and marketing chiefs is the best-known example.

    Management teams are filling up with players such as CDOs (chief digital officer / chief data officer), CAOs (chief analytics officer), CMTs (chief marketing technologist) and even CXOs (chief experience officer).

    Put politely, the job of these bosses is to act as an intermediary between the digitalised customer base and the company's various departments.

    However, Cio.com asks whether it makes strategic sense to turn management teams into playgrounds for ever-larger egos.

    Experts disagree on the need for the new leaders.

    As digitalisation progresses, companies are frantically looking for playmakers who can help the CIO with technology challenges.

    Source: http://www.tivi.fi/CIO/2015-03-18/CIO-n%C3%A4yt%C3%A4-kykysi-digibisneksen-hallitsijana-3217651.html

    Tech hotshots: The rise of the chief data officer
    http://www.computerworld.com/article/2895077/tech-hotshots-the-rise-of-the-chief-data-officer.html

    As CIOs become overwhelmed by IT demands, chief data officers are stepping in to serve as a centralized point of data governance.

    Wes Hunt is unambiguous about his objective as the first chief data officer at Nationwide Mutual Insurance: “Our goal,” he says, “is to treat data as the asset it is and drive value from that.”

    “Historically, the work of determining how data was gathered, stored, managed and destroyed was distributed among legal, HR, the CIO and business functions. Now emerges the chief data officer, who is the authority on all things data management and data governance,” says Dorman Bazzell, the practice leader for Emerging Technologies and Advanced Solutions at Capgemini.

    The monumental task of trying to manage data in such a disparate environment means that enterprise leaders often can’t deliver on their data’s full value proposition, analysts say. To counter that situation, leading companies are creating the position of CDO.

    Reply
  45. Tomi Engdahl says:

    Reuters:
    Microsoft to offer free upgrade to Windows 10 even for non-genuine Windows PCs, to tackle piracy

    Microsoft tackles China piracy with free upgrade to Windows 10
    http://www.reuters.com/article/2015/03/18/us-microsoft-china-idUSKBN0ME06A20150318

    (Reuters) – Microsoft Corp is making its biggest push into the heavily pirated Chinese consumer computing market this summer by offering free upgrades to Windows 10 to all Windows users, regardless of whether they are running genuine copies of the software.

    The move is an unprecedented attempt by Microsoft to get legitimate versions of its software onto machines of the hundreds of millions of Windows users in China. Recent studies show that three-quarters of all PC software is not properly licensed there.

    “We are upgrading all qualified PCs, genuine and non-genuine, to Windows 10,”

    Reply
  46. Tomi Engdahl says:

    A day may come when flash memory is USELESS. But today is not that day
    However sometime in the 2020s it will be. What then?
    http://www.theregister.co.uk/2015/03/18/the_future_of_solid_state_storage/

    The era of flash memory is anticipated to run out of road in the 2020s and newer technologies involving resistance and electron spin are poised to take over, delivering higher capacities, greater speed and DRAM-style addressability.

    Some people ask if one of these new technologies could actually unify dynamic memory (RAM) and non-volatile memories in a single universal memory tech.

    That seems far-fetched at the moment; just finding a working successor to NAND looks like achievement enough.

    Of the IT systems companies, only IBM and HP are involved in fundamental research areas that have produced breakthroughs in the post-NAND area. The flash foundry operators, such as Samsung, Toshiba and Micron, are more interested in extending the life of NAND and their investment in NAND production processes than in replacing it with something else.

    They can't ignore it, however. Micron has developed phase-change memory (PCM) and was manufacturing product until recently. Toshiba has looked into STT-RAM, and Samsung has dabbled with STT-RAM as well.

    NAND flash memory is odd technology. It is non-volatile, of course, like tape and disk, but unlike RAM it is not byte-addressable: it has to be written in pages of bytes at a time, with each byte going into a cell, and a page can only be rewritten after its whole block has been erased.
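
    A toy model of that constraint (illustrative; page and block sizes are arbitrary): data is programmed a page at a time, but a page can only be reprogrammed after its entire block has been erased.

        PAGES_PER_BLOCK = 64

        class ToyNAND:
            """Illustrative model of NAND's program/erase asymmetry."""

            def __init__(self, blocks):
                # None means erased; a page must be erased before programming.
                self.pages = [[None] * PAGES_PER_BLOCK for _ in range(blocks)]

            def program(self, block, page, data):
                if self.pages[block][page] is not None:
                    raise ValueError("page in use: erase the whole block first")
                self.pages[block][page] = data    # write granularity: one page

            def erase(self, block):
                # Erase granularity: the whole block, never a single page.
                self.pages[block] = [None] * PAGES_PER_BLOCK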

    As NAND semi-conductor technology reaches the end of its process shrink road, with cells becoming progressively more error-prone and short-lived below 10nm in size, alternative post-NAND technologies are being examined to see if they can deliver the increased density (capacity) that is needed without carrying NAND’s disadvantages.

    Users will always want more capacity and more speed, and post-NAND technologies will be needed. Two or three years ago it was thought that the post-NAND future was going to arrive quickly, but TLC (triple-level cell) and 3D NAND technology are pushing back the end of the NAND era to the 2020s.

    There are three main post-NAND development thrusts: phase-change memory (PCM), resistive RAM (RRAM) and spin transfer torque RAM (STT-RAM).

    Reply
  47. Tomi Engdahl says:

    Devs don’t care about cloud-specific coding, right? Er, not so
    Loving the data centre like it was home
    http://www.theregister.co.uk/2015/03/11/data_centre_cloud_apps_do_devs_pay_attention/

    Software application developers don’t care about the back-end guts and architectures that serve their programs much, right?

    While the notion persists that programmers are a cerebral bunch of innovators focused only on functionality, the wider sentiment here is starting to change.

    Of course, developers think about the data centre back end and the servers that drive it; this is what the cloud is, after all. Who doesn’t now consider themselves to be a cloud developer and a mobile (served from the cloud) developer?

    The trouble comes down to a number of factors. Data centres as citadels of the cloud have, until very recently, failed to represent an immediate access route to physical programming tools.

    Microsoft itself has only this year explained how its transition from Team Foundation Server to Visual Studio Online has been undertaken to get programmers using real cloud-based tools.

    That’s programming in the cloud, for the cloud, on the cloud.

    This necessitates a certain leap of faith as aspects of control logic become separated from physical computing infrastructures, but this is where costs come down. Follow the logic onwards: when application service costs come down, the cost of application development comes down, and the cost of the applications themselves (may even) come down.

    Working on cloud data centre-based application development does mean some things are different. Remember, everything requires a log in or connection, so aspects of identity (from both a user and code component perspective) suddenly become even more important.

    So is moving to cloud programming that radical a revolution? It is the detail that really matters – many on-premises application components are seemingly identical to their cloud data centre cousins; they just differ in their implementation, because they scale differently and benefit from (usually better) dynamic allocation controls.

    Reply
  48. Tomi Engdahl says:

    Delving into Office 2016: Microsoft goes public with new preview
    Next version of Office for Windows is unveiled
    http://www.theregister.co.uk/2015/03/18/delving_into_office_2016_microsoft_goes_public_with_new_preview/

    Microsoft’s Office team is in overdrive, delivering new versions this year for Android, Mac (in preview), Windows 10 Universal App Platform (in preview), and now Windows desktop, also in preview.

    The public preview was announced at Microsoft’s Convergence event under way in Atlanta.

    According to VP Kirk Koenigsbauer, the current build “doesn’t yet contain all the features we’re planning to ship in the final product,” which may explain why it seems so similar to Office 2013; more like a point upgrade than a major new edition.

    Reply
  49. Tomi Engdahl says:

    MSBuild Engine is now Open Source on GitHub
    http://blogs.msdn.com/b/dotnet/archive/2015/03/18/msbuild-engine-is-now-open-source-on-github.aspx

    Today we are pleased to announce that MSBuild is now available on GitHub and we are contributing it to the .NET Foundation! The Microsoft Build Engine (MSBuild) is a platform for building applications. By invoking msbuild.exe on your project or solution file, you can orchestrate and build products in environments where Visual Studio isn’t installed. For instance, MSBuild is used to build the .NET Core Libraries and .NET Core Runtime open source projects.
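
    As a usage note, a typical headless build from the command line looks like this (MyApp.sln is a placeholder; /t and /p are standard MSBuild switches):

        msbuild MyApp.sln /t:Rebuild /p:Configuration=Release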

    Reply
