The PC market seems to be stabilizing in 2016; I expect it to shrink slightly. While mobile devices have been named as culprits for the fall in PC shipments, IDC said that other factors may be in play. It is still pretty hard to make any decent profit building PC hardware unless you are one of the biggest players – so again Lenovo, HP, and Dell are increasing their collective dominance of the PC market like they did in 2015. I expect changes like spin-offs and maybe some mergers involving smaller players like Fujitsu, Toshiba and Sony. The EMEA server market looks to be a two-horse race between Hewlett Packard Enterprise and Dell, according to Gartner. HPE, Dell and Cisco “all benefited” from Lenovo’s acquisition of IBM’s EMEA x86 server organisation.
The tablet market is no longer a high-growth market – tablet sales have started to decline, and the decline continues in 2016 as owners are holding onto their existing devices for more than three years. iPad sales are set to continue declining, and the iPad Air 3 to be released in the first half of 2016 does not change that. IDC predicts that the detachable tablet market is set for growth in 2016 as more people turn to hybrid devices. Two-in-one tablets have been popularized by offerings like the Microsoft Surface, with options ranging dramatically in price and specs. I am not myself convinced that the growth will be as strong as IDC forecasts, even though companies have started to purchase tablets for workers in jobs such as retail sales or field work (Apple iPads, Windows and Android tablets managed by the company). Combined volume shipments of PCs, tablets and smartphones are expected to increase only in the single digits.
All your consumer tech gear should be cheaper come July, as there will be fewer import tariffs for IT products: a World Trade Organization (WTO) deal agrees that tariffs on imports of consumer electronics will be phased out over seven years starting in July 2016. The agreement affects around 10 percent of world trade in information and communications technology products and will eliminate around $50 billion in tariffs annually.
In 2015 storage was rocked to its foundations, and those new innovations will be taken into wider use in 2016. The storage market in 2015 went through strategic foundation-shaking turmoil as the external shared disk array storage playbook was torn to shreds: The all-flash data centre idea has definitely taken off as a vision that could be achieved, so that primary data is stored in flash with the rest being held in cheap and deep storage. Flash drives generally solve the disk drive access latency problem, so there is less need for hybrid drives. There is conviction that storage should be located as close to servers as possible (virtual SANs, hyper-converged industry appliances and NVMe fabrics). The existing hybrid cloud concept was adopted/supported by everybody. Flash started out in 2-bits/cell MLC form, which rapidly became standard, and TLC (3-bits/cell, or triple-level cell) had started appearing. Industry-standard NVMe drivers for PCIe flash cards appeared. Intel and Micron blew non-volatile memory preconceptions out of the water in the second half of the year with their joint 3D XPoint memory announcement. Boring old disk tech got shingled magnetic recording (SMR) and helium-filled drive technology; the drive industry is focused on capacity-optimizing its drives. We got key:value store disk drives with an on-board Ethernet NIC, and basic GET and PUT object storage facilities came into being. The tape industry developed the LTO-7 format with a 15TB (compressed) capacity.
The use of SSDs will increase and their price will drop. SSDs were in more than 25% of new laptops sold in 2015, are expected to be in 31% of new consumer laptops in 2016, and in more than 40% by 2017. The prices of mainstream consumer SSDs have fallen dramatically every year over the past three years while HDD prices have not changed much. SSD prices will decline to 24 cents per gigabyte in 2016. In 2017 they’re expected to drop to 11-17 cents per gigabyte (meaning a 1TB SSD on average would retail for $170 or less).
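The per-gigabyte forecasts above translate directly into retail prices; a quick back-of-the-envelope check (the figures are the forecast averages quoted above, not real quotes):

```python
# Rough $/GB figures from the forecast above (averages, not quotes);
# the 2017 value takes the midpoint of the 11-17 cents range.
ssd_price_per_gb = {2016: 0.24, 2017: 0.14}

def retail_price(capacity_gb, year):
    """Estimated average retail price (USD) for an SSD of this capacity."""
    return capacity_gb * ssd_price_per_gb[year]

print(round(retail_price(1000, 2016)))  # 1TB SSD in 2016: 240
print(round(retail_price(1000, 2017)))  # 1TB SSD in 2017: 140
```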
Hard disk sales will decrease, but this technology is not dead. Sales of hard disk drives have been decreasing for several years now (118 million units in the third quarter of 2015), but according to Seagate hard disk drives (HDDs) are set to stay relevant for at least another 15 to 20 years. HDDs remain the most popular data storage technology because they are the cheapest in terms of per-gigabyte cost. While SSDs are generally getting more affordable, high-capacity solid-state drives are not going to become as inexpensive as hard drives any time soon.
Because all-flash storage systems with homogenous flash media are still too expensive to serve as a solution for every enterprise application workload, enterprises will increasingly turn to performance-optimized storage solutions that use a combination of multiple media types to deliver cost-effective performance. The speed advantage of Fibre Channel over Ethernet has evaporated. Enterprises are also starting to seek alternatives to snapshots that are simpler and easier to manage, and that allow data and application recovery to a second before the data error or logical corruption occurred.
Local storage and the cloud finally make peace in 2016 as the decision-makers across the industry have now acknowledged the potential for enterprise storage and the cloud to work in tandem. Over 40 percent of data worldwide is expected to live on or move through the cloud by 2020 according to IDC.
Open standards for data center development are now a reality thanks to advances in cloud technology. Facebook’s Open Compute Project has served as the industry’s leader in this regard. This allows more consolidation for those that want it. Consolidation used to refer to companies moving all of their infrastructure to the same facility. However, some experts have begun to question this strategy, as the rapid increase in data quantities and apps in the data center has made centralized facilities more difficult to operate than ever before. Server virtualization, more powerful servers and an increasing number of enterprise applications will continue to drive higher IO requirements in the datacenter.
Cloud consolidation starts in earnest in 2016: the number of options for general infrastructure-as-a-service (IaaS) cloud services and cloud management software will be much smaller at the end of 2016 than at the beginning. The major public cloud providers will gain strength, with Amazon, IBM SoftLayer, and Microsoft capturing a greater share of the business cloud services market. Lock-in is a real concern for cloud users, because PaaS players have the age-old imperative to find ways to tie customers to their platforms and aren’t afraid to use them, so advanced users want to establish reliable portability across PaaS products in a multi-vendor, multi-cloud environment.
The year 2016 will be harder for legacy IT providers than 2015. In its report, IDC states that “By 2020, More than 30 percent of the IT Vendors Will Not Exist as We Know Them Today.” Many enterprises are turning away from traditional vendors and toward cloud providers. They’re increasingly leveraging open source. In short, they’re becoming software companies. The best companies will build cultures of performance and doing the right thing – and will make data and the processes around it self-service for all their employees. Design Thinking will guide companies that want to change the lives of their customers and employees. 2016 will see a lot more work in trying to manage services that simply aren’t designed to work together or even be managed – for example, getting Whatever-as-a-Service cloud systems to play nicely with existing legacy systems. So competent developers are the scarce commodity. Some companies are starting to see cloud as a form of outsourcing that is fast burning up in-house IT ops jobs, with varying success.
There are still too many old-fashioned companies that just can’t understand what digitalization will mean for their business. In 2016, some companies’ boards still think the web is just for brochures and porn and don’t believe their business models can be disrupted. It gets worse for many traditional companies. For example, Amazon is a retailer both on the web and increasingly for things like food deliveries. Amazon and others are playing to win. Digital disruption has happened and will continue.
Windows 10 will spread further in 2016. If 2015 was a year of revolution, 2016 promises to be a year of consolidation for Microsoft’s operating system. I expect that Windows 10 adoption in companies starts in 2016. Windows 10 is likely to be a success for the enterprise, but I expect that the word from heavyweights like Gartner, Forrester and Spiceworks, suggesting that half of enterprise users plan to switch to Windows 10 in 2016, is more than a bit optimistic. Windows 10 will also be used in China, as Microsoft played the game better with it than with Windows 8, which was banned in China.
Windows is now delivered “as a service”, meaning incremental updates with new features as well as security patches, but Microsoft still seems to work internally to a schedule of milestone releases. Next up is Redstone, rumoured to arrive around the anniversary of Windows 10, midway through 2016. Windows servers will also get an update: 2016 should include the release of Windows Server 2016. Server 2016 includes updates to the Hyper-V virtualisation platform, support for Docker-style containers, and a new cut-down edition called Nano Server.
Windows 10 will get some of the features promised but not delivered in 2015. Windows 10 was promised for PCs and mobile devices in 2015 to deliver a unified user experience. Continuum is a new, adaptive user experience offered in Windows 10 that optimizes the look and behavior of apps and the Windows shell for the physical form factor and the customer’s usage preferences. The promise was the same unified interface for PCs, tablets and smartphones – but in 2015 it was delivered only for PCs and some tablets. Windows 10 Mobile for smartphones is finally expected to arrive in 2016 – the release of Microsoft’s new Windows 10 operating system may be the last roll of the dice for its struggling mobile platform. Microsoft’s Plan A is to get as many apps and as much activity as it can on Windows on all form factors with the Universal Windows Platform (UWP), which enables the same Windows 10 code to run on phone and desktop. Despite a steady inflow of new well-known apps, it remains unclear whether the Universal Windows Platform can maintain momentum with developers. Can Microsoft keep the developer momentum going? I am not sure. In addition there are also plans for tools for porting iOS apps and an Android runtime, so expect delivery of some or all of the Windows Bridges (iOS, web app, desktop app, Android) announced at the April 2015 Build conference, in the hope of getting more apps into the unified Windows 10 app store. Windows 10 does hold out some promise for Windows Phone, but it’s not going to make an enormous difference. Losing the battle for the Web and mobile computing is a brutal loss for Microsoft. When you consider the size of those two markets combined, the desktop market seems like a stagnant backwater.
Older Windows versions will not die in 2016 as fast as Microsoft and security people would like. Expect Windows 7 diehards to continue holding out in 2016 and beyond. And there are still many companies that run their critical systems on Windows XP, as “There are some people who don’t have an option to change.” Many times the OS is running in automation and process control systems that run business- and mission-critical systems, in both the private sector and government enterprises. For example, the US Navy is using the obsolete operating system Microsoft Windows XP to run critical tasks. It all comes down to money and resources, but if someone is obliged to keep something running on an obsolete system, it’s completely the wrong approach to information security.
Virtual reality has grown immensely over the past few years, but 2016 looks like the most important year yet: it will be the first time that consumers can get their hands on a number of powerful headsets for viewing alternate realities in immersive 3-D. Virtual reality will move toward the mainstream when Oculus, Sony, and Samsung bring consumer products to market in 2016. The whole virtual-reality hype cycle could be rebooted as early builds of final Oculus Rift hardware start shipping to developers. HTC’s and Valve’s Vive VR headset may suffer delays over the next few months. Expect a banner year for virtual reality.
GPU and FPGA acceleration will be used widely in high performance computing. Both Intel and AMD have products with CPU and GPU in the same chip, and there is software support for using the GPU (learn CUDA and/or OpenCL). Many mobile processors also have CPU and GPU on the same chip. FPGAs are circuits that can be baked into a specific application, but can also be reprogrammed later. There was a lot of interest in 2015 in using FPGAs for accelerating computations as the next step after GPUs, and I expect that interest will grow even more in 2016. FPGAs are not quite as efficient as a dedicated ASIC, but they are about as close as you can get without translating the actual source code directly into a circuit. Intel bought Altera (a big FPGA company) in 2015 and plans in 2016 to begin selling products with a Xeon chip and an Altera FPGA in a single package – possibly available in early 2016.
Artificial intelligence, machine learning and deep learning will be talked about a lot in 2016. Neural networks, which have been academic exercises (but little more) for decades, are increasingly becoming mainstream success stories: heavy (and growing) investment in the technology, which enables the identification of objects in still and video images, words in audio streams, and the like after an initial training phase, comes from the formidable likes of Amazon, Baidu, Facebook, Google, Microsoft, and others. So-called “deep learning” has been enabled by the combination of the evolution of traditional neural network techniques, the steadily increasing processing “muscle” of CPUs (aided by algorithm acceleration via FPGAs, GPUs, and, more recently, dedicated co-processors), and the steadily decreasing cost of system memory and storage. There were many interesting releases in this area at the end of 2015: Facebook Inc. released portions of its Torch software in February, while Alphabet Inc.’s Google division open-sourced parts of its TensorFlow system. IBM also turned up the heat under competition in artificial intelligence by making SystemML freely available to share and modify through the Apache Software Foundation. So I expect that 2016 will be the year these are tried in practice. I expect that deep learning will be hot at CES 2016. Several respected scientists issued a letter warning about the dangers of artificial intelligence (AI) in 2015, but I don’t worry about a rogue AI exterminating mankind. I worry about an inadequate AI being given control over things that it’s not ready for. How will machine learning affect your business? MIT has a good free intro to AI and ML.
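As a toy illustration of the “initial training phase” mentioned above, here is a single artificial neuron (a perceptron) learning the AND function in plain Python – real deep-learning frameworks like Torch and TensorFlow stack millions of such units, but the learning loop has the same basic shape:

```python
import random

# Toy illustration of training: a single neuron (perceptron) learning
# the AND function. Weights start random and are nudged toward the
# correct answer on every mistake (the classic perceptron update rule).
random.seed(0)
weights = [random.random(), random.random()]
bias = random.random()

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

for _ in range(20):                      # training epochs
    for x, target in data:
        error = target - predict(x)      # 0 if correct, +/-1 if wrong
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in data])     # after training: [0, 0, 0, 1]
```

After a handful of passes over the data the weights settle and the neuron classifies all four inputs correctly; AND is linearly separable, so the perceptron is guaranteed to converge.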
Computers, which excel at big data analysis, can help doctors deliver more personalized care. Can machines outperform doctors? Not yet. But in some areas of medicine, they can make the care doctors deliver better. Humans repeatedly fail where computers — or humans behaving a little bit more like computers — can help. Computers excel at searching and combining vastly more data than a human so algorithms can be put to good use in certain areas of medicine. There are also things that can slow down development in 2016: To many patients, the very idea of receiving a medical diagnosis or treatment from a machine is probably off-putting.
The Internet of Things (IoT) was talked about a lot in 2015, and it will be a hot topic for IT departments in 2016 as well. Many companies will notice that security issues are important in it. The newest wearable technology – smart watches and other smart devices – responds to voice commands and interprets the data we produce: it learns from its users and generates appropriate responses in real time. Interest in the Internet of Things (IoT) will also bring interest in real-time business systems: not only real-time analytics, but real-time everything. This will start in earnest in 2016, but the trend will take years to play out.
Connectivity and networking will be hot. And it is not just about IoT. CES will focus on how connectivity is proliferating everything from cars to homes, realigning diverse markets. The interest will affect job markets: Network jobs are hot; salaries expected to rise in 2016 as wireless network engineers, network admins, and network security pros can expect above-average pay gains.
Linux will stay big in the network server market in 2016. The web server marketplace is one arena where Linux has had the greatest impact. Today, the majority of web servers are Linux boxes, including most of the world’s busiest sites. Linux will also run many parts of our Internet infrastructure that moves the bits from server to user. Linux will also continue to rule the smartphone market, being at the core of Android. New IoT solutions will most likely be built mainly using Linux in many parts of the systems.
Microsoft and Linux are not the enemies they were a few years ago. Common sense says that Microsoft and the FOSS movement should be perpetual enemies. It looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it. Microsoft is already partnering with Linux companies to bring popular distros to its Azure platform. In fact, Microsoft has even gone so far as to create its own Linux distro for its Azure data center.
Web browsers are becoming more and more 64-bit, as Firefox started the 64-bit era on Windows and Google is killing Chrome for 32-bit Linux. At the same time web browsers are losing old legacy features like NPAPI and Silverlight. Who will miss them? The venerable NPAPI plugin standard, which dates back to the days of Netscape, is now showing its age and causing more problems than it solves, and will see native support removed from Firefox by the end of 2016. It was already removed from the Google Chrome browser with very little impact. The biggest issue was the lack of support for Microsoft’s Silverlight, which brought down several top streaming media sites – but they are actively switching to HTML5 in 2016. I don’t miss Silverlight. Flash will continue to be available owing to its popularity for web video.
SHA-1 will be at least partially retired in 2016. Due to recent research showing that SHA-1 is weaker than previously believed, Mozilla, Microsoft and now Google are all considering bringing the deadline forward by six months to July 1, 2016.
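For illustration, both the deprecated SHA-1 and its SHA-2 successors are available in Python’s standard hashlib module; the digest-length difference is easy to see, though the real reason for retirement is SHA-1’s weakening collision resistance rather than its size:

```python
import hashlib

# SHA-1 produces a 160-bit digest; SHA-256, from the SHA-2 family that
# certificates are migrating to, produces a 256-bit digest. SHA-1 is
# being retired because research shows its collision resistance is
# weaker than previously believed.
msg = b"hello world"
sha1 = hashlib.sha1(msg).hexdigest()
sha256 = hashlib.sha256(msg).hexdigest()

print(len(sha1) * 4)    # 160 bits
print(len(sha256) * 4)  # 256 bits
```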
Adobe’s Flash has been under attack from many quarters over security as well as for slowing down web pages. If you wish that Flash would finally be dead in 2016, you might be disappointed. Adobe seems to be trying to kill the name with a rebranding trick: Adobe Flash Professional CC is now Adobe Animate CC. In practice it probably does not mean much, but Adobe seems to acknowledge the inevitability of an HTML5 world. Adobe wants to remain a leader in interactive tools, and the pivot to HTML5 requires new messaging.
The trend of trying to use the same language and tools on both the user end and the server back-end continues. Microsoft is pushing its .NET and Azure cloud platform tools. Amazon, Google and IBM have their own sets of tools. Java is in decline. JavaScript is going strong on both the web browser and server end with node.js, React and many other JavaScript libraries. Apple is also trying to bend its Swift programming language, now used mainly to make iOS applications, to run on servers with project Perfect.
Java will still stick around, but Java’s decline as a language will accelerate as new stuff isn’t being written in Java, even if it runs on the JVM. We will not see Java 9 in 2016, as Oracle has delayed the release of Java 9 by six months. The Register reports that Java 9 is delayed until Thursday March 23rd, 2017, just after tea-time.
Containers will rule the world as Docker continues to develop, gain security features, and add various forms of governance. Until now Docker has been tire-kicking, used in production by the early-adopter crowd only, but that can change when vendors start to claim that they can do proper management of big data and container farms.
NoSQL databases will take hold as they are billed as “highly scalable” or “cloud-ready.” Expect 2016 to be the year when a lot of big brick-and-mortar companies publicly adopt NoSQL for critical operations. Basically NoSQL can be seen as a key:value store, and this idea has also expanded to storage systems: we got key:value store disk drives with an on-board Ethernet NIC, and basic GET and PUT object storage facilities came into being.
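The GET/PUT object semantics that both NoSQL stores and these key:value disk drives expose can be sketched in a few lines. This is only a minimal sketch with a Python dict standing in for the drive media – a real drive speaks this kind of protocol over its Ethernet port, not through an in-process class:

```python
class KeyValueStore:
    """Minimal sketch of the GET/PUT object interface that key:value
    disk drives with an on-board Ethernet NIC expose instead of a
    block-device interface. A dict stands in for the storage media."""

    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        # Store an immutable copy of the object bytes under the key.
        self._objects[key] = bytes(value)

    def get(self, key):
        # Return the object bytes, or None if the key is absent.
        return self._objects.get(key)

store = KeyValueStore()
store.put("sensor/42/reading", b'{"temp": 21.5}')
print(store.get("sensor/42/reading"))  # b'{"temp": 21.5}'
```

The point of the interface is that the application addresses objects by key and never sees blocks, sectors or file systems at all.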
In the database world Big Data will still be big, but it needs to be analyzed in real time. A typical big data project usually involves some semi-structured data, a bit of unstructured data (such as email), and a whole lot of structured data (stuff stored in an RDBMS). While the cost of Hadoop on a per-node basis is pretty inconsequential, the cost of understanding all of the schemas, getting them into Hadoop, and structuring them well enough to perform the analytics is still considerable. Remember that you’re not “moving” to Hadoop, you’re adding a downstream repository, so you need to worry about systems integration and latency issues. Apache Spark will also get interest, as Spark’s multi-stage in-memory primitives provide more performance for certain applications. Big data brings with it responsibility – digital consumer confidence must be earned.
IT security continues to be a huge issue in 2016. You might be able to achieve adequate security against hackers and internal threats, but every attempt to make systems idiot-proof just means the idiots get upgraded. Firms are ever more connected to each other and to the general outside world. So in 2016 we will see even more service firms accidentally leaking critical information and a lot more firms having their reputations scorched by incompetence-fuelled security screw-ups. Good security people are needed more and more – a joke doing the rounds among IT execs conducting interviews is “if you’re a decent security bod, why do you need to look for a job?”
There will still be unexpected single points of failure in big distributed networked systems. The cloud behind the silver lining is that Amazon or any other cloud vendor can be as fault tolerant, distributed and well supported as you like, but if a service like Akamai or Cloudflare were to die, you still stop. That’s not a single point of failure in the classical sense, but it’s really hard to manage unless you go for full cloud agnosticism – which is costly. This is hard to justify when their failure rate is so low, so the irony is that the reliability of the content delivery networks means fewer businesses work out what to do if they fail. Oh, and no one seems to test their mission-critical data centre properly, because it’s mission critical. So they just over-specify where they can and cross their fingers (= pay twice and get half the coverage for other vulnerabilities).
For IT start-ups it seems that Silicon Valley’s cash party is coming to an end. Silicon Valley is cooling, not crashing. Valuations are falling. The era of cheap money could be over and valuation expectations are re-calibrating down. The cheap capital party is over. It could mean trouble for weaker startups.
933 Comments
Tomi Engdahl says:
Apple Expects Your iPhone To Expire In Three Years
http://www.forbes.com/sites/ewanspence/2016/04/16/what-is-the-life-expectancy-of-my-iphone/#16225e535c3c
There is no mythical ‘self-destruct’ chip inside Apple’s hardware, but Cupertino has acknowledged that it believes its hardware has a definitive life span. And the suggested life feels rather on the low side.
While there may be heroic devices out there with far longer life spans (and I’m writing this on one such MacBook Pro that’s approaching the six-year mark, albeit with some DIY replaced parts) these numbers tie in with much of Apple’s strategy, including the availability of OS updates to older devices and the hardware offered to replace older units.
Apple does provide compatibility of its operating systems with older hardware, but the details released in the environmental pages codify the length of support that hardware can expect. While the smartphone industry is built around regular purchases of new hardware thanks to carrier contracts (currently at the two-year mark), laptops and desktops do not have the same formal structure. At least not one that Apple has previously acknowledged.
Tomi Engdahl says:
Intel XPoint emperor has no clothes, only soiled diapers
Micron’s deafening XPoint silence
http://www.theregister.co.uk/2016/04/15/intel_xpoint_emperor_has_no_clothes/
Intel’s XPoint marketing is such frenetic, hype-filled BS that it is setting up the world to be utterly underwhelmed by the XPoint reality.
We have had a mini deluge of XPoint news recently, with Frank T. Hady, Intel Fellow and Chief 3D XPoint Storage Architect giving a pitch at the 7th Annual Non-Volatile Memories Workshop 2016 at UC San Diego, and with Intel demo’ing an Optane SSD at its IDF 2016 in Shenzhen, China.
Yet from Micron, listen as hard as we can, all we hear is a deafening silence. Why is this?
It’s the way the Intel-Micron joint venture is structured. According to people not a million miles away from Intel and Micron XPoint activities, their XPoint JV is 51 per cent owned by Micron, and 49 per cent by Intel. Micron has the right to buy out Intel’s share, but Intel doesn’t have a reciprocal right. This asymmetry affects the marketing/PR side of the XPoint JV as well, with Intel allowed to do as it’s doing and Micron effectively hobbled for some period of time.
Now, this “1,000 x” market positioning, as in XPoint being 1,000 times faster than flash and having 1,000 times more endurance. There will be venerable flash drives that are slow enough and short-lived enough to justify these ratios, but not enterprise-class stuff that’s available today, and the latest XPoint performance stats revealed by Intel show this.
Latest XPoint stats
An examination of the reported Hady presentation and the IDF Shenzhen demo revealed these XPoint Optane gen 1 numbers:
20nm process
SLC (1 bit/cell)
7 microsec latency, or 7,000 nanoseconds
78,500 (70:30 random) read/write IOPS
NVMe interface
Well, at last, real numbers. So XPoint is 1,000 times faster than SSDs, with an Intel P3700 PCIe flash card having a latency of 85 microseconds; yeah, right, prepare to be under-freaking-whelmed by XPoint’s latency.
It is only 12 times faster than a modern Intel PCIe flash card, 16 times faster than a Micron NVMe 7100 or 9100 flash drive’s read latency, and a mere six times faster than said drives’ write latency.
So here is a bomb detonated under the XPoint-is-1,000-times-faster claim, which is shown to overstate the speed difference tenfold.
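The arithmetic behind that conclusion is easy to check from the latency figures quoted above:

```python
# Latency figures quoted above, in microseconds.
xpoint_latency = 7.0      # Optane gen 1 (7,000 nanoseconds)
flash_latency = 85.0      # the Intel PCIe flash card cited above

speedup = flash_latency / xpoint_latency
print(round(speedup, 1))  # ~12x faster, nowhere near the claimed 1,000x
```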
Tomi Engdahl says:
The outlook for cloud service providers is excellent: research house IDC predicts cloud IT infrastructure investments will grow 18.9 percent this year.
As a result, sales of traditional IT infrastructure systems will shrink by four per cent this year, although traditional systems remain the largest segment, accounting for 62.8 per cent of all end-user investments.
Public cloud accounts for 14.1 per cent of all infrastructure investment; private cloud represents 11.1 per cent.
ElasticHosts CEO Richard Davies says customers especially benefit from next-generation container technology, which improves the scalability of IT services.
“In the public cloud, customers pay only for the computing services they use, so capacity is no longer left idle,”
Within cloud infrastructure, Ethernet switches lead the growth, with investments increasing by 26.8 per cent during the next 12 months.
Servers and storage devices will grow at rates of 12.4 and 11.3 per cent respectively.
Source: http://www.tivi.fi/CIO/pilvi-infran-myynti-kasvaa-lahes-viidenneksen-tana-vuonna-6542309
Tomi Engdahl says:
Casey Newton / The Verge:
Facebook’s David Marcus on bots rollout: bots should respond in under 5 seconds, server load may be to blame for latency; new ecosystem takes time to get right
Facebook Messenger’s David Marcus on the rocky rollout of bots
How Facebook Messenger sees the future
http://www.theverge.com/2016/4/18/11422278/facebook-messenger-bots-david-marcus-interview
The search for the killer bot is well underway in Silicon Valley — but it’s off to a rocky start. Microsoft’s big push into artificial intelligence began with Tay, the teen-mimicking chatbot that Twitter users turned into a crazy racist in record time. Facebook’s introduction of its bot-building platform this week at its F8 developer conference went more smoothly. But early adopters have complained about the bots’ mysterious user interfaces, their aggressive messaging, and the fact they don’t seem all that much smarter than a Microsoft Office wizard from the 1990s.
Tomi Engdahl says:
Minecraft: Education Edition launches beta in May
http://www.wired.co.uk/news/archive/2016-04/15/minecraft-education-edition-beta-launches-may
Microsoft has revealed it will launch a beta for its educational Minecraft splinter in May, allowing teachers to use the phenomenally popular sandbox game in the classroom.
Announced on the Minecraft blog, the beta will encompass more than 100 schools in 30 countries around the world, allowing educators to provide feedback on the project and help develop a final version and “fine-tune the experience across a diverse set of learning environments.”
The freeform nature of Minecraft makes it highly adaptable to lessons in many subjects, from accurate (if blocky) recreations of historical sites, through to molecular science
Tomi Engdahl says:
Natalie Gagliordi / ZDNet:
IBM beats Q1 earnings targets with quarterly revenue of $18.7B; strategic businesses including cloud and analytics grew revenue 14% year-over-year
IBM beats Q1 earnings targets on double-digit cloud growth
http://www.zdnet.com/article/ibm-beats-q1-earnings-targets-on-double-digit-cloud-growth/
IBM said quarterly revenue from its strategic businesses including cloud and analytics increased 14 percent year-over-year and now represent 37 percent of the company’s revenue.
IBM’s cloud revenue climbed 36 percent to $10.8 billion over the last 12 months.
Tomi Engdahl says:
How Microsoft is rethinking the way it sells
http://www.zdnet.com/article/how-microsoft-is-rethinking-the-way-it-sells/
The ‘new’ Microsoft mission statement and focus on ‘growth mindset’ are having impacts on how the company is looking at selling products and services.
We Microsoft watchers have heard lots of talk about CEO Satya Nadella’s “cultural transformation” at the company.
Sure, it all sounds nice. But I didn’t think changes in the company’s latest mission statement (“to empower every person and every organization on the planet to achieve more”) and the focus on “growth mindset” had any measurable implications in terms of products and sales strategies. And, after all, that’s what really matters to Microsoft customers outside the company.
Tomi Engdahl says:
Digital priorities for the CIO in 2016
http://www.zdnet.com/article/digital-priorities-for-the-cio-in-2016/
In an era of fast change, disruptive tech, and too many priorities, what will the CIO really need to focus on this year to deliver combined technical and business leadership that will guide their organizations into the future?
In 2016, the digital revolution has finally become an unavoidable priority for the majority of executives in the typical organization today. For years it was possible — even sensible — for senior leaders to ‘kick the can’ on responding to technology disruption. The calculus was that they could realistically put it off for three to five years, often long enough to be onto their next position, or retirement, before having to tackle the hard, risky, and unfamiliar world of digital business.
Most organizations were aware this time was coming, and I suggested last year that IT had ‘one last chance’ to lead digital transformation, before other parts of the business responded or, perhaps more likely, nimble new digital entrants reinvented their industry and started taking away real market share with surprising rapidity.
There’s no question that today, this year, the CIO — along with the CEO and often the CMO and the new Chief Digital Officer — is currently on the hot seat to produce credible results with more profound and effective digitization of the business
However, digital transformation is about far more than developing sharing-economy business models — where the disruption starts with dramatically lower operational and capital expenditures and often ends with a radically better digital customer experience — it also demands a whole host of changes required to both become, and reap the benefits of being, a digital native organization. That’s because digital native companies — typically because they’re newer, can start green-field, and attract the best and brightest who want to use the latest advances — use different, more contemporary technologies, processes, organizational models, and techniques for innovation, operations, and growth.
Tomi Engdahl says:
Adam Conner-Simons / MIT News:
MIT’s new AI platform, which incorporates input from human experts, can predict 85% of cyberattacks, which is 3x better than previous benchmarks.
System predicts 85 percent of cyber-attacks using input from human experts
http://news.mit.edu/2016/ai-system-predicts-85-percent-cyber-attacks-using-input-human-experts-0418
Virtual artificial intelligence analyst developed by the Computer Science and Artificial Intelligence Lab and PatternEx reduces false positives by factor of 5.
Today’s security systems usually fall into one of two categories: human or machine. So-called “analyst-driven solutions” rely on rules created by living experts and therefore miss any attacks that don’t match the rules. Meanwhile, today’s machine-learning approaches rely on “anomaly detection,” which tends to trigger false positives that both create distrust of the system and end up having to be investigated by humans, anyway.
But what if there were a solution that could merge those two worlds? What would it look like?
In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the machine-learning startup PatternEx demonstrate an artificial intelligence platform called AI2 that predicts cyber-attacks significantly better than existing systems by continuously incorporating input from human experts. (The name comes from merging artificial intelligence with what the researchers call “analyst intuition.”)
The team showed that AI2 can detect 85 percent of attacks, which is roughly three times better than previous benchmarks, while also reducing the number of false positives by a factor of 5. The system was tested on 3.6 billion pieces of data known as “log lines,” which were generated by millions of users over a period of three months.
Creating cybersecurity systems that merge human- and computer-based approaches is tricky, partly because of the challenge of manually labeling cybersecurity data for the algorithms.
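The human-plus-machine loop described above can be illustrated with a toy sketch: score events by how anomalous they are, flood the analyst with alerts at a loose threshold, then use the analyst's labels to tighten the threshold. Everything below — events, thresholds, the z-score heuristic — is an invented assumption for illustration, not AI2's actual algorithm:

```python
# Toy human-in-the-loop anomaly detector (illustrative only).
from statistics import mean, stdev

def anomaly_scores(events):
    """Score each event by its absolute z-score distance from the mean."""
    m, s = mean(events), stdev(events)
    return [abs(x - m) / s for x in events]

def alerts(scores, threshold):
    """Indices of events whose score exceeds the threshold."""
    return [i for i, s in enumerate(scores) if s > threshold]

# Day 1: a loose threshold floods the analyst with alerts.
events = [10, 11, 9, 10, 12, 11, 10, 50, 9, 11]   # one real attack (the 50)
scores = anomaly_scores(events)
day1 = alerts(scores, threshold=0.3)

# The analyst confirms which alerts were real attacks; the system then
# raises the threshold to just below the weakest confirmed attack,
# cutting false positives on the next pass while keeping the detection.
confirmed = {7}
new_threshold = min(scores[i] for i in confirmed) * 0.9
day2 = alerts(scores, new_threshold)

print(len(day1), day2)   # 6 [7] — six noisy alerts shrink to the one real attack
```

The real system learns far richer models from analyst feedback, but the feedback-tightens-the-detector shape is the same.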
Tomi Engdahl says:
Pentagon CIO: Services all in agreement on Windows 10 goal
http://fedscoop.com/dod-cio-services-all-in-agreement-on-windows-10-goal
Despite reports, the services are on board with the DOD-wide Windows 10 transition by the end of January 2017, Halvorsen said. He also said DOD has its eye on the money with data center consolidations.
All four military services and agencies across the Defense Department are in agreement with a tight timetable to move to Microsoft Windows 10, Pentagon CIO Terry Halvorsen told reporters Friday.
There had been reports that individual military branches doubted the feasibility of making the transition by Jan. 31, 2017, as ordered by Deputy Secretary Bob Work in February.
But Halvorsen said that’s not the case.
“[Army CIO Robert] Ferrell and I are in 100 percent agreement, as are the leadership of the Army and the [DOD] secretary, that our role remains to get Windows 10 in a year,” Halvorsen said.
DOD has never before had “an operating system that had this much security baked in from the beginning,” he said. “If you’re using a computer at home and you’re not on Windows 10, you’re doing yourself an injustice — you ought to be moving to Windows 10.”
US Government Wants Its Employees to Install Windows 10 on Home PCs
Read more: http://news.softpedia.com/news/us-government-wants-its-employees-to-install-windows-10-on-home-pcs-503029.shtml#ixzz46GwgrMsK
Tomi Engdahl says:
The PC has for several decades been the main driver of the semiconductor industry and the engine of component sales. Now Intel has announced it will cut up to 12,000 jobs and turn the PC processor company into a maker of processors for cloud, mobile and IoT devices.
These growth areas brought Intel USD 2.2 billion of growth last year, and the majority of the company’s operating profit. At the same time, the new areas largely patched over the decline in PC processor sales.
Intel’s decision to turn its back on the traditional PC processor is nevertheless dramatic.
Source: http://etn.fi/index.php?option=com_content&view=article&id=4284:intel-sanoo-hyvastit-pc-lle&catid=13&Itemid=101
Tomi Engdahl says:
DistroWatch tracks visitor numbers on the support pages of the various Linux distributions, from which one can infer which Linux is the most popular at any given time. Now it seems that Ubuntu is taking the number-one spot from Linux Mint, which has held the most-popular title for a long time.
DistroWatch does not analyse the reasons behind the trends, but for Mint and Ubuntu the reasons are fairly obvious. Mint’s popularity took a severe blow when the distribution’s download websites were hacked.
Ubuntu’s rise in popularity is in turn explained by the fact that its latest release, the sixth long-term support version, is being published this Thursday. The 16.04 upgrade brings Linux kernel 4.4, the Unity interface (version 7), and a large number of ready-made applications for the platform.
The latest version of Ubuntu is codenamed Xenial Xerus.
Source: http://etn.fi/index.php?option=com_content&view=article&id=4281:ubuntu-nousemassa-linuxien-karkeen&catid=13&Itemid=101
Tomi Engdahl says:
Intel literally decimates workforce: 12,000 will be axed, CFO shifts to sales
While banking a $2bn profit in first three months of the year
http://www.theregister.co.uk/2016/04/19/intel_q1_fy2016_job_cuts/
Intel will axe 12,000 employees globally – more than one in ten of its workforce – as it moves further away from being a PC chip company.
The layoffs are among the biggest in the company’s history, and come as the PC industry continues to tank harder than Intel expected.
The Santa Clara-based biz sees a lot of growth in the worlds of data centers, memory, and the internet of things – anything that doesn’t look like a traditional desktop computer, the sales of which are dwindling. As a result, fewer processors for normal PCs, laptops and tablets are needed, and so Intel is rejigging itself to focus more on these growth areas – which will mean losing some workers.
“We’re seen as a PC company. It’s time to make a transition to push the company all the way over to our new strategy,” Intel CEO Brian Krzanich told analysts on a conference call on Tuesday.
The processor giant said about 11 per cent of its 107,000 staffers will be shed through “site consolidations worldwide, a combination of voluntary and involuntary departures, and a re-evaluation of programs.”
Intel said the decimation will save it $750m this year and deliver “annual run rate savings of $1.4 billion by mid-2017.”
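The reported figures are easy to cross-check (a quick sanity sketch, assuming as a simplification that the press numbers are exact):

```python
# Cross-checking the Register's Intel layoff figures.
workforce = 107_000
layoffs = 12_000
share = layoffs / workforce
print(f"{share:.1%}")            # 11.2% — matches "about 11 per cent"

run_rate_2017 = 1.4e9            # $1.4bn annual run-rate savings by mid-2017
per_employee = run_rate_2017 / layoffs
print(f"${per_employee:,.0f}")   # ≈ $116,667 annual saving per cut role
```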
Tomi Engdahl says:
So you’d sod off to China to escape the EU, Google? Really?
Baidu awaits, then
http://www.theregister.co.uk/2016/04/19/google_eu_analysis/
Google structures its entire organisation to avoid privacy laws, minimise taxes and de-risk itself from competition oversight. Today Google’s European supremo hinted that being in China might be less of a hassle, and that losing Google would serve us Europeans right for being so backward.
Of course, it’s a sheer coincidence that Google exec Matt Brittin’s aggressive comments follow the news that the European Commission is likely to file a formal complaint into the bundling of Google applications with its dominant Android platform this week.
“If the services and products they are using are not made in Europe then they will be made in China, and Asia-Pacific and Silicon Valley, and that will be a big missed opportunity,” Brittin told the FT in a tirade against ignorant European regulators, who aren’t “digital” enough. (That’s the FT’s shorthand for what he was getting at).
The reason for Brittin’s hostility was the EU’s “red tape”. By “red tape” we presume he means things like taxation, privacy and competition law. The kind of “red tape” that is harder for some to avoid than others.
Tomi Engdahl says:
John Ribeiro / Computerworld:
Mesosphere makes its data center management software, DC/OS, open source
Mesosphere open-sources data center management software
http://www.computerworld.com/article/3058287/data-center/mesosphere-open-sources-data-center-management-software.html
The startup is backed in this move by over 60 tech companies, including Hewlett Packard Enterprise and Microsoft
Derived from its Datacenter Operating System, a service that Mesosphere set out to build as an operating system for all servers in a data center as if they were a single pool of resources, the open-source DC/OS offers capabilities for container operations at scale and single-click, app-store-like installation of over 20 complex distributed systems, including HDFS, Apache Spark, Apache Kafka and Apache Cassandra, the company said in a statement Tuesday.
DC/OS is built around the Apache Mesos kernel for distributed tools including analytics, file systems and Web servers; Mesosphere founder Benjamin Hindman and colleagues at the University of California, Berkeley developed Mesos in 2009.
But the minimalist approach used to develop Mesos proved inadequate when it came to running most applications as other functionality such as service discovery, load balancing, user/service authentication and authorization, and command-line and user interfaces had to come from components that run alongside or on top of Mesos, Hindman said in a blog post.
“By open sourcing DC/OS we’re enabling organizations of all sizes to harness the same computing infrastructure as the Twitters and Apples of the world,”
While some of the technologies in DC/OS such as Mesos were already open source, others such as the GUI and the Minuteman load balancer were proprietary technologies developed by Mesosphere.
Some of the components that Mesosphere built as part of its Data Center Operating System, and are included in DC/OS, are Marathon, a container orchestrator platform; Universe, which provides an app store-like experience for deploying distributed systems and additional management components; tools for operating the DC/OS from the Web or command line; and GUI-based installers for on-premises and cloud.
Tomi Engdahl says:
Austin Walker / Giant Bomb:
Sources: upgraded PlayStation 4 is codenamed NEO, contains stronger CPU, GPU, and faster RAM to support 4K output; there will be no NEO-exclusive games.
Sources: The Upgraded PlayStation 4 is Codenamed NEO, Contains Upgraded CPU, GPU, RAM
http://www.giantbomb.com/articles/sources-the-upgraded-playstation-4-is-codenamed-ne/1100-5437/
Though the NEO will offer greater visual fidelity than the original PS4, Sony is taking measures not to split their user base in two.
Tomi Engdahl says:
GitLab-Digital Ocean partnership to provide free hosting for continuous online code testing
http://techcrunch.com/2016/04/19/gitlab-digital-ocean-partnership-to-provide-free-hosting-for-continuous-online-code-testing/
GitLab.com users have always had free access to the Gitlab programmer’s toolset including GitLab Runner. Today it announced a new feature called Gitlab Runner Autoscale that allows continuous code testing at scale, and to make that even more palatable, the company teamed up with cloud infrastructure provider DigitalOcean to provide free hosting for testing that code.
Yes, it’s really free.
“We are introducing Gitlab runner Autoscale [and we are teaming with DigitalOcean] to provision more servers as you need them. We will make sure there are enough machines to run your code,” Sytse ‘”Sid” Sijbrandij, CEO of GitLab told TechCrunch.
“Companies can go from zero if nobody is pushing code to hundreds of runners if everyone is submitting code. They can hook into a DigitalOcean account and provision as many servers as they need. Because it’s fast, they never start in a queue. It’s secure because you deprecate the server when you’re done testing. And it’s cost-effective because you don’t have to run the servers the entire time,” Sijbrandij explained.
Tomi Engdahl says:
Mario Anima / Google Docs Blog:
Google Keep gets updated with label support for organizing notes; now you can easily save web links and images to Keep from Android, iOS, and desktop Chrome
Stay on task with today’s updates in Google Keep
https://docs.googleblog.com/2016/04/stay-on-task-with-todays-updates-in.html
How many times have you found yourself with a great idea, but no easy way to jot it down for later? Or maybe you’ve got lots of notes scattered around, with no central spot to find them. Having a single place to capture what’s on your mind and save your ideas and to-do lists is what Google Keep is all about, and today’s updates give you a few new ways to collect and manage the information that’s important to you.
The next time you’re on a website that you want to remember or reference later on, use the new Keep Chrome extension to add it—or any part of it—to a note in Keep. Just click the Keep badge to add a site’s link to a note, or select some text or an image and create a new note from the right-click menu.
Same goes for Android
Organize your thoughts with #Labels
One of your top asks has been for a way to organize and categorize notes, and now it’s as easy as using a #hashtag.
https://www.google.com/keep/
Tomi Engdahl says:
Samit Sarkar / Polygon:
Microsoft stops manufacturing new Xbox 360 consoles, will continue to sell remaining inventory and support existing consoles
Microsoft ending Xbox 360 production
It’s not going anywhere, though
http://www.polygon.com/2016/4/20/11468032/xbox-360-production-ending-microsoft
Microsoft has stopped manufacturing new Xbox 360 consoles, the company announced today.
“Xbox 360 means a lot to everyone in Microsoft,” said Phil Spencer, head of Xbox. “And while we’ve had an amazing run, the realities of manufacturing a product over a decade old are starting to creep up on us.”
Spencer added that Microsoft “will continue to sell existing inventory of Xbox 360 consoles, with availability varying by country.” The Xbox 360′s current retail price is $199.99, in a bundle with a 500 GB system and a copy of Forza Horizon 2.
Microsoft launched the Xbox 360 on Nov. 22, 2005, in North America; the console turned 10 years old last November
Tomi Engdahl says:
Google teams with Iron Mountain for LTO-to-cloud migration
Tape still not dead: it will die in the year N where N is this year plus 1
http://www.theregister.co.uk/2016/04/21/google_cloud_lto_to_cloud_migration/
Google and Iron Mountain are trying to hasten the never-quite-imminent death of tape as a storage medium with an LTO-to-cloud migration collaboration.
LTO – Linear Tape-Open, for those among you not enamoured of rusty ribbons – is a standard tape format that counts IBM, HP and Quantum among its backers. A single seventh-generation LTO cartridge can store 15 terabytes and the standard plans a tenth-generation cartridge packing 120 terabytes.
Google, however, operates a service called Cloud Storage Nearline that starts at US$0.01 per gigabyte per month and promises three-second restore times. That’s rather faster than even a high-end tape library will achieve.
There’s just one problem with Nearline and that’s the loooong time needed to move a terabyte of data from tape to the cloud. Google’s addressed that problem with “cloud seeding”, the practice of finding partners with fat pipes to its bit barns and the kit to arrange uploads. One such partner, Iron Mountain, has just announced it’s increased the bandwidth from its bit barns to Google’s by a factor of ten. That increase, Google says, means “moving 50TB of data over the expanded link takes less than a day.”
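The "50TB in less than a day" claim implies a sustained link speed you can estimate in a few lines (a back-of-the-envelope sketch, assuming decimal terabytes and full link utilisation):

```python
# Minimum sustained bandwidth to move 50 TB in 24 hours.
tb = 50
bits = tb * 1e12 * 8            # decimal terabytes to bits
seconds_per_day = 24 * 3600
required_gbps = bits / seconds_per_day / 1e9
print(f"{required_gbps:.2f} Gbit/s")   # ≈ 4.63 Gbit/s sustained
```

In other words, "less than a day" for 50TB means Iron Mountain's expanded link to Google sustains several gigabits per second.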
Google gives away 100 PETABYTES of storage to irritate AWS
Nearline service goes live, creates nightmare for tape vendors
http://www.theregister.co.uk/2015/07/24/google_gives_away_100_petabytes_of_storage_to_irritate_aws
Tomi Engdahl says:
Microsoft lures IT pros with breadcrumb trail of candy to its cloud
We didn’t build all this for you to goof off on AWS
http://www.theregister.co.uk/2016/04/20/microsoft_cloud_rains_gifts/
Microsoft has offered up a grab bag of goodies for IT administrators looking to add cloud skills to their resume, including free trials of Azure and Office 365, plus support and training credits, along with some career advice.
“We are in the middle of the cloud technology transition, and IT professionals are not always leading this transition,” said Mike Neil, Redmond’s veep of enterprise cloud.
“To capture this opportunity, IT professionals need to rapidly familiarize themselves with cloud technologies, and evolve their skills.”
In other words, we’ve been building out all these cloud services and now we need to make sure enough people use them. To tempt techies over to its systems, Microsoft has set up a certification program, dubbed IT Pro Cloud Essentials, and is offering a year’s free subscription.
The package includes $100 of free Azure credits per month for the first three months if you sign up before September 30, and a long-term lower price for access to the cloud network.
It’s a reasonable package for an admin aspiring to pick up some cloud skills, and Microsoft has told The Reg that it doesn’t expect to charge for the package next year and the same freebies should apply.
Tomi Engdahl says:
Ubuntu 16.04 LTS arrives today complete with forbidden ZFS
The Xenial Xerus want to get cloudy, but first let battle be joined in GPL hell!
http://www.theregister.co.uk/2016/04/21/ubuntu_16_04_lts_launched/
Canonical will today (April 21st) launch version 16.04 of its Ubuntu Linux distribution, Xenial Xerus, the new long-term-support version of the project.
As the name suggests, long-term support versions of Ubuntu get long-term support, a guaranteed five years from today to be precise. The Xenial Xerus will therefore be fed, watered and de-loused for years to come, making them a fine platform for serious endeavours.
Canonical thinks the Xerus is ideal for cloudy, containerised computing. ZFS is pitched as one of several features that make the distribution ideal for such roles. The LXD hypervisor is central to Canonical’s containerisation ambitions, as it offers greater speed and density for guests and therefore makes the containers-inside-lightweight-VMs play possible. The inclusion of Ceph adds scale for storage, which helps Ubuntu to take on weightier tasks with or without OpenStack.
Tomi Engdahl says:
Open-Source Project Secretly Funded by CIA
http://www.linuxjournal.com/content/open-source-project-secretly-funded-cia
It’s fair to say that the interests of governments and the FOSS community are not always aligned. That’s not to say that the US government is out to crush every FOSS project or that every FOSS user is on a secret mission to destroy the government. Nonetheless, the relationship is often a strained one.
So it shouldn’t be surprising that the Open Source community gets a little restless when it learns that the government has its hands in an open-source project—particularly when we discover it’s secretly pouring money into the pockets of developers to develop features it requires. And, when the government agency in question is the CIA—well, you can understand why some feathers are ruffled.
It shouldn’t be surprising to learn that the CIA is a big investor in tech development. After all, if there’s one thing we’ve learned from spy movies and TV, it’s that spies love their gadgets.
If there’s a suitable commercial project in development, the answer is venture capital. The CIA has its own venture capital branch called In-Q-Tel. In-Q-Tel’s mission is to get the required technology into the hands of the CIA’s analysts and agents as soon as possible. It does that by using its money to support the R and D costs of public companies who are working on similar products.
Of course, as Silicon Valley continues to embrace open source, that means a number of open-source projects actually are funded by the CIA. Docker is one example of a high-profile open-source firm that was secretly funded by the CIA.
Given the recent FBI demands to insert back doors into iPhones to “help investigate criminals”, you can understand why some privacy advocates are worried as to how much control the CIA exerts over some of these projects.
Of course, adding a back door to Docker would be quite hit-and-miss as a spying strategy. It seems more likely to me that the CIA wants to steer the project to meet its own container needs
But even if spying on end users isn’t the goal, another concern is that projects like Docker could be steered in the wrong direction
Tomi Engdahl says:
Why you should pay employees for free-time Open Source contributions
http://futurice.com/blog/year-2015-in-company-sponsored-open-source
Late in the year 2014 we at Futurice began to sponsor employee free time open source contributions. What has happened since? Let’s take a look at the year 2015.
Our sponsorship model has remained the same. €15 paid per reported hour, max 30 hours monthly. Anything goes, as long as it is Open Source.
In 2015 we paid sponsorship money based on 400 reports by 60 different people, working on 150 different projects, spending a sum of 2500 reported hours. This cost the company €37500. In 2015 Futurice had about 240 employees.
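The sponsorship arithmetic checks out, as a quick verification shows (figures taken directly from the post):

```python
# Verifying Futurice's 2015 open-source sponsorship figures.
rate_eur = 15          # €15 per reported hour
hours = 2500           # total reported hours in 2015
cost = rate_eur * hours
print(cost)            # 37500 — matches the reported €37,500

employees = 240
contributors = 60
print(contributors / employees)   # 0.25 — one in four employees took part
```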
Competence development
We did a learning survey recently, focusing on these sponsored free time (open source) activities.
Recruitment
We now systematically collect information in the recruitment process to help us assess the impact of different things, such as the open source sponsorship. This is fairly new and we don’t have enough data yet to publish any results.
Here’s an actual quote from our lead recruiter that I have not asked for permission to use:
After two days at a job fair in London, the Spice Program works really well in our favour in attracting developers. Since the job market here is challenging and relies heavily on headhunters, it’s of great value.
– Tuomas Paasonen, Lead Recruiter, Futurice
Employee engagement
We had the pleasure to provide data and contacts to Erik Stenberg’s graduation work for the Hanken School of Economics last year with the apt title: “Spicing up employee engagement – A case study of an open source program”.
Erik’s study looked into how work-related extra-role activities, such as that exemplified by the Spice Program, could affect employee engagement.
Erik found the following positive effects to employee engagement:
An improved sense of membership in the work community
Alignment with the values promoted by the program
Personal attachment to the program identity
Competence improvement and increasing confidence
Autonomy to pursue interests and needs
Signals of approval from the employer
Tomi Engdahl says:
Peter Kafka / Re/code:
A summary of Bill Gurley’s big warning about Silicon Valley’s big money troubles
We read Bill Gurley’s big warning about Silicon Valley’s big money troubles so you don’t have to
http://recode.net/2016/04/21/bill-gurley-unicorn-funding-essay/
Last night, Benchmark VC Bill Gurley posted a 5,700-word piece sounding the alarm about the state of over-funded Silicon Valley companies and the investors who over-funded them. It’s going to be the talk of the tech world today, so if you haven’t read it yet, you’d better get started ASAP.
What’s that? You have a day job? You don’t have time to read a 5,700-word piece? Even if it’s only 5,681 words?
No worries. Part of my day job involves summarizing 5,681-word pieces. Here you go.
What’s the big picture?
Gurley says too many Silicon Valley companies have raised too much money, and now they’re in trouble. The same goes for the investors who gave them all that money. “Times are changing,” he tweets.
That sounds familiar.
That’s because Gurley has been saying too many Silicon Valley companies have raised too much money, and will be in trouble, for a couple of years now. And every time he says it, it generates a lot of attention.
So what does that mean for the tech company I work at or invested in?
This is the most important part of Gurley’s essay: He sketches out a scenario in which companies that have gotten used to easy money but have yet to build a business that makes money, find that they need to raise more money — and that the easy money is gone.
Now, they may have to raise money at valuations below their previous marks — the “down rounds” you’ve heard whispers about for some time.
Tomi Engdahl says:
The US Government and Open-Source Software
http://www.linuxjournal.com/content/us-government-and-open-source-software?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29
As part of the “Second Open Government National Action Plan”, the federal government is planning to share the source code behind many of its software projects.
To begin with, the plans call for federal agencies to share code with each other. This will help reduce development costs when government departments each work on the same functionality independently. Solving the same problem twice (or more often) is expensive and a waste of taxpayer’s money.
What’s more, sharing source code between government departments makes it easier for those departments to collaborate, which again reduces the expense to the tax-payer. Bugs can be discovered and fixed faster, and software designed by different departments will be based on the same underlying technology. In theory, that should make these systems more compatible with each other.
Sharing source code between different parts of the government makes a lot of sense. In fact, it’s a policy that should have been passed a long time ago, but it’s not quite what most of us mean when we say “open-source” software.
The proposal also has a portion that relates to open-source software in the traditional sense of the term. It states that up to 20% of the custom code written by the federal government each year should be shared with the public.
Tomi Engdahl says:
Slashdot Asks: Is the Golden Era of Video-Game Console Sales Over?
https://games.slashdot.org/story/16/04/21/1758235/slashdot-asks-is-the-golden-era-of-video-game-console-sales-over?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29
Microsoft announced on Wednesday that it has stopped producing Xbox 360, a gaming console it launched in 2005. According to estimations, the company sold more than 85 million Xbox 360 units worldwide. Quartz has an insightful story today, in which it compares the shipment numbers of Xbox One and the PlayStation 4, the current generation consoles, to conclude that the “golden era” of video-game console sales is over.
THINKING OUTSIDE THE BOX
The golden era of video-game console sales is over
http://qz.com/666299/the-golden-era-of-video-game-console-sales-is-over/
Only the first two Sony PlayStation consoles, and the Nintendo Wii, a console that challenged the concept of how people play video games, and who plays them, have shipped more units than the Xbox 360.
Microsoft’s follow-up console, the Xbox One, has not sold nearly as well as the 360. In 2008, less than three years after it was launched, the company said the 360 had sold over 19 million units worldwide. The Xbox One was released in 2013, and has sold about 10 million units in roughly the same amount of time as its predecessor.
Tomi Engdahl says:
Managing infrastructure, a newbie’s guide: Simple stuff you need to know
Even if you do know it all already, it never hurts to refresh the basics
http://www.theregister.co.uk/2016/04/07/the_steps_from_legacy_to_today/
We all have IT and telco infrastructure equipment that’s getting older. Time marches on and few of us have the funds or resources to renew everything when it reaches its official point of being written off by the bean-counters.
Here’s a seven-step guide to dragging your infrastructure kicking and screaming into the current century.
1. Draw a hardware obsolescence timeline
2. Standardise your networking
3. Look to the cloud
4. Integrate your applications
5. Virtualize your servers and storage
6. Consolidate to fewer suppliers
7. Don’t be scared to push the boundaries
Finally, we’ve all heard horror stories about companies that have adopted “bleeding-edge” technology and have been bitten on the bum. But how often is that actually true?
Consider an organisation that had two distinct halves: the “we must provide a stable service” half, which insisted on working with established technology; and the “we want newer, faster NOW” half, which pushed the boundaries.
In fact, the latter was more stable: they responded more quickly when a software patch came out; they were the only part of the organisation not to be clobbered by broadcast storms because they had a routed, segmented network; and they had high-speed WAN connectivity first by doing some innovative tunnelling instead of waiting for the rare-as-hen’s-teeth “approved” router to arrive.
What we now call “tomorrow” will soon be renamed “today”.
Tomi Engdahl says:
To SQL or NoSQL? That’s the database question
But as technological lines blur, there’s not always a clear-cut answer.
http://arstechnica.com/information-technology/2016/03/to-sql-or-nosql-thats-the-database-question/
Poke around the infrastructure of any startup website or mobile app these days, and you’re bound to find something other than a relational database doing much of the heavy lifting. Take, for example, the Boston-based startup Wanderu. This bus- and train-focused travel deal site launched about three years ago. And fed by a Web-generated glut of unstructured data (bus schedules on PDFs, anyone?), Wanderu is powered by MongoDB, a “NoSQL” database—not by Structured Query Language (SQL) calls against traditional tables and rows.
But why is that? Is the equation really as simple as “Web-focused business = choose NoSQL?” Why do companies like Wanderu choose a NoSQL database? (In this case, it was MongoDB.) Under what circumstances would a SQL database have been a better choice?
Today, the database landscape continues to become increasingly complicated. The usual SQL suspects—SQL Server-Oracle-DB2-Postgres, et al.—aren’t handling this new world on their own, and some say they can’t. But the division between SQL and NoSQL is increasingly fuzzy, especially as database developers integrate the technologies together and add bits of one to the other.
The genesis of NoSQL
In the beginning—about 12 years ago—there was structured data, and it was good. Usually consisting of things like numbers, dates, and groups of words and numbers called strings, structured data could be displayed in titled columns and rows that were easy to order. Financial companies loved it: you could put customers’ names and account balances into rows with titled columns, and you could put the data into tables and do other things with it, like join tables and run queries in a language, SQL, that was pretty close to English.
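The titled-columns-and-joins model described above can be sketched with Python's built-in sqlite3 module (the table and column names here are invented for illustration, not taken from any system in the article):

```python
import sqlite3

# An in-memory database with the kind of titled rows and columns described above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE accounts (customer_id INTEGER, balance REAL)")
con.execute("INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob')")
con.execute("INSERT INTO accounts VALUES (1, 250.0), (2, 90.0)")

# A join query reads close to English: which customers hold which balances?
rows = con.execute(
    "SELECT c.name, a.balance FROM customers c "
    "JOIN accounts a ON a.customer_id = c.id ORDER BY c.name"
).fetchall()
print(rows)  # [('Alice', 250.0), ('Bob', 90.0)]
```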
But that data was live, stored in operational systems like Enterprise Resource Planning (ERP) setups.
SQL-based relational servers are built to handle the demands of financial transactions, designed around the tenets of ACID: Atomicity, Consistency, Isolation, and Durability. These characteristics ensure that only one change can be written to a data field at a time, so there are no conflicting transactions made.
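Atomicity, the A in ACID, is the easiest of the four tenets to demonstrate: a multi-step transaction either fully happens or fully doesn't. A minimal sketch using sqlite3 (account names and amounts are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
con.execute("INSERT INTO accounts VALUES ('Alice', 100.0), ('Bob', 100.0)")
con.commit()

# Atomicity: the two halves of a transfer either both happen or neither does.
try:
    with con:  # the connection context manager rolls back on any exception
        con.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'Alice'")
        raise RuntimeError("simulated crash before the matching credit runs")
except RuntimeError:
    pass  # the debit above was rolled back, not half-applied

balances = dict(con.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'Alice': 100.0, 'Bob': 100.0}
```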
ACID, though, doesn’t matter much when you’re just reading data for analysis. And the database locks that SQL databases use to protect database consistency in transactions can get in the way. The Internet ushered in what VoltDB Director of Product Marketing Dennis Duckworth calls “Web-scale attacks” on databases: as in, hundreds of thousands or even millions of people wanting access to the same data sources at the same time.
How do you scale an Internet business to handle that? It used to be that you’d buy a bigger server—an HP Superdome, say, or a huge mainframe that could scale up. But that got expensive fast. Businesses turned to buying cheaper, commodity boxes to scale out instead of up, distributing the database out over hundreds or even thousands of servers.
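Distributing a database over hundreds of commodity boxes usually means partitioning (sharding) the data so every client can compute which box holds a given record. A minimal sketch of hash-based sharding (the server names are invented for illustration):

```python
import hashlib

SERVERS = ["db-node-0", "db-node-1", "db-node-2", "db-node-3"]

def shard_for(key: str) -> str:
    """Map a record key to one of many commodity servers by hashing it."""
    digest = hashlib.sha1(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Every client computes the same placement, so no central lookup is needed.
placement = {user: shard_for(user) for user in ["alice", "bob", "carol"]}
```

Real systems typically use consistent hashing instead, so that adding a node does not reshuffle every key, but the placement idea is the same.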
But outside of financial transactions, you don’t always need the most up-to-the-second abilities to write data. “Pretty close” can be good enough, such as when the database is just overwriting old data and it’s OK to get the results a little wrong for a few minutes. Think of Google’s indexing.
That setup is a little sloppy, and it wouldn’t work for financial transactions. But that ability is just fine for developers who need drop-dead-fast results, not pinpoint perfect.
NoSQL databases are often associated with “big data” tasks
Hadoop
Hadoop isn’t even really considered a database, but when you look at databases, you’ll no doubt come across it. It was designed as a cheap way to store data and process it someday, in some way. Currently, it’s huge. Hadoop is everywhere online: Facebook, eBay, Etsy, Yelp, Twitter, Salesforce.
MongoDB
Beyond the not-really-a-database Hadoop, there are an awful lot of actual databases to choose from. DB-Engines ranks 264 of them by popularity. MongoDB is at No. 4 of all databases, including both SQL and NoSQL, and it’s the most popular NoSQL database.
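The document model that MongoDB popularized can be illustrated with plain Python dictionaries (this is a schematic of the idea, not MongoDB's actual API; the field names and values are invented):

```python
# A relational row forces a fixed schema; a document can carry whatever
# structure each record happens to have -- e.g. scraped bus schedules.
trip_a = {
    "carrier": "ExampleBus",
    "from": "Boston", "to": "New York",
    "departures": ["08:00", "12:30", "17:45"],  # variable-length, no join table
}
trip_b = {
    "carrier": "ExampleRail",
    "from": "Boston", "to": "Providence",
    "seat_classes": {"coach": 12.0, "business": 29.0},  # a field trip_a lacks
}

collection = [trip_a, trip_b]

# Queries walk the documents instead of joining tables.
from_boston = [t for t in collection if t["from"] == "Boston"]
```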
Couchbase
Fun fact: Couch is an acronym for cluster of unreliable commodity hardware. Couchbase Server is an open source, NoSQL, distributed, JSON-based document store.
It’s considered to be a CP-type system, meaning it provides consistency and partition tolerance (the C and P of the CAP theorem).
VoltDB and the return of SQL
In 2008, with SQL-confounding Web data flooding the world and object-oriented programming all the rage as the tool to handle it, SQL was sneered at as if the venerable, powerful query language were a paisley polyester shirt. But SQL isn’t going anywhere—it’s evolving, thanks to advances in data science. By 2013, SQL was experiencing a renaissance, and its power became manifest in NewSQL. This is a reworking of relational database management systems that promised extreme scalability for online transaction processing (OLTP), just like NoSQL systems, along with the same valuable guarantees of ACID compliance that traditional RDBMSes had always promised.
VoltDB is one example—a distributed, in-memory, massively parallel NewSQL relational database that clearly shows that SQL can be made to scream.
Convergence
It’s increasingly apparent that for many, it’s no longer an issue of SQL vs. NoSQL. Instead, it’s SQL and NoSQL, with both having their own clear places—and increasingly being integrated into each other. Microsoft, Oracle, and Teradata, for example, are now all selling some form of Hadoop integration to connect SQL-based analysis to the world of unstructured big data.
Tomi Engdahl says:
Visual Doom AI Competition @ CIG 2016
http://vizdoom.cs.put.edu.pl/competition-cig-2016
Motivation
Doom has been considered one of the most influential titles in the game industry since it popularized the first-person shooter (FPS) genre and pioneered immersive 3D graphics. Even though more than 20 years have passed since Doom’s release, the methods for developing AI bots have not improved significantly in newer FPS productions. In particular, bots still have to “cheat” by accessing the game’s internal data, such as maps, locations of objects, and positions of (player or non-player) characters. In contrast, a human can play FPS games using a computer screen as the only source of information. Can AI effectively play Doom using only raw visual input?
Goal
The participants of the Visual Doom AI competition are expected to submit a controller (C++, Python, or Java) that plays Doom. The provided software gives real-time access to the screen buffer as the only information the agent can base its decisions on. The winner of the competition will be chosen in a deathmatch tournament.
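The constraint the competition imposes can be pictured as an interface like the following (a schematic sketch, not the actual ViZDoom API; all names are invented):

```python
import random

ACTIONS = ["move_forward", "turn_left", "turn_right", "attack"]

def choose_action(screen_buffer):
    """Pick an action given ONLY raw pixels -- no maps, object locations, or
    other internal game state, mirroring what a human player can see."""
    # A real entry would feed the pixels to a learned policy (e.g. a convnet);
    # a random policy stands in here just to show the interface shape.
    assert all(0 <= px <= 255 for row in screen_buffer for px in row)
    return random.choice(ACTIONS)

frame = [[0] * 4 for _ in range(3)]  # stand-in for a grayscale frame
action = choose_action(frame)
```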
Can you teach a computer to play Doom deathmatches like a human?
http://thenextweb.com/apps/2016/04/22/researchers-want-to-teach-computers-to-play-doom-like-a-human/
Tomi Engdahl says:
AI Birds.org Angry Birds AI Competition
http://aibirds.org/
Here you will find all the information about upcoming and previous Angry Birds AI Competitions. The task of this competition is to develop a computer program that can successfully play Angry Birds. The long term goal is to build an intelligent Angry Birds playing agent that can play new levels better than the best human players. This is a very difficult problem as it requires agents to predict the outcome of physical actions without having complete knowledge of the world, and then to select a good action out of infinitely many possible actions. This is an essential capability of future AI systems that interact with the physical world. The Angry Birds AI competition provides a simplified and controlled environment for developing and testing these capabilities.
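The "predict the outcome of physical actions" requirement is, at its simplest, projectile physics. A toy version of the prediction an agent must make (the flat-ground assumption and the speed value are simplifications for illustration):

```python
import math

def landing_distance(speed, angle_deg, g=9.81):
    """Horizontal range of a projectile launched over flat ground."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# An agent can search launch angles for the one that lands nearest a target,
# turning "infinitely many possible actions" into an optimization problem.
target = 40.0
best = min(range(1, 90), key=lambda a: abs(landing_distance(22.0, a) - target))
```

The real game is much harder: the agent does not know the physics constants or the level structure exactly, which is why the task remains open.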
Tomi Engdahl says:
Chromebooks may soon be able to run almost every Android app
http://www.theverge.com/2016/4/24/11500082/chromebooks-android-app-support-coming
It looks like Google may soon be breaking down barriers between its two operating systems and giving Chrome OS users access to Android apps from the Google Play Store. As spotted by a Reddit user this weekend, Chromebooks running version 51 of Chrome OS are showing a checkbox in their settings menu that reads “Enable Android apps to run on your Chromebook.” The option disappears quickly, but the Chrome OS source code appears to indicate that Chromebook users will soon get access to the “more than a million” games and apps on the Google Play Store.
Google first started making Android apps — including Vine, Evernote, and Duolingo — available on Chrome OS back in 2014, as part of a limited trial that was expanded in April the next year with the wider launch of the ARC (App Runtime for Chrome) Welder app.
Tomi Engdahl says:
40% of Silicon Valley’s Profits (But Not Sales) Came from Apple
https://apple.slashdot.org/story/16/04/24/0011248/40-of-silicon-valleys-profits-but-not-sales-came-from-apple
The San Jose Mercury News reports that last year 40% of Silicon Valley’s profits came from one company — Apple. “The iPhone maker accounted for 28 percent of the Bay Area tech industry’s $833 billion in 2015 sales,” while “Its profits were a jaw-dropping 40 percent of the region’s $133 billion total.”
Meanwhile, Google’s parent company Alphabet racked up $75 billion in sales, representing nearly 57% of the total for all Silicon Valley internet companies, followed by eBay and PayPal.
But while sales grew, internet-company profits fell by 29% as more companies focused on growth.
Quinn: After a year of slowing sales, Silicon Valley’s future takes shape
http://www.siliconvalley.com/michelle-quinn/ci_29803200/quinn-after-year-slowing-sales-silicon-valleys-future
It’s a social media world
Alphabet, the parent company of Google, dominates all things Internet with $75 billion in sales in 2015, more than half of the $132 billion in revenue the sector generated.
But peek behind Alphabet, and there are a bunch of social media companies clambering up the sales ladder fast compared with firms in other sectors.
Facebook, ranked ninth this year, jumped up one slot. It reported nearly $18 billion in sales, up 44 percent from the prior year. (Synnex, an IT supply chain services firm, dropped to the 10th position from the ninth.)
Netflix, once dismissed as a glorified video rental outfit, is now ranked 14th in sales, up five slots from 19th.
And Twitter, despite its struggles to grow its user base, jumped 10 companies to now rank 40th in terms of sales.
Tomi Engdahl says:
There’s really only one dominant company in financial tech
http://uk.businessinsider.com/financial-tech-companies-by-market-cap-2016-4?r=US&IR=T
There’s a lot of hype around the new crop of financial tech companies aiming to disrupt how we save, invest, buy, and move money around.
But the company that kicked off the first financial tech revolution is still around, and it’s much bigger than all the newcomers. PayPal was founded in 1998 and quickly became the dominant way to pay for things on eBay — so much so that eBay bought the company in 2002. It spun back out again as an independent public company last year, and now has a market cap around $47 billion. That makes it worth more than all the other financial tech startups on this chart from Statista, combined.
Then again, if you expand the definition of “financial tech” to include traditional players like banks and credit card companies, PayPal once again looks like a small fish.
Tomi Engdahl says:
When It Comes to Age Bias, Tech Companies Don’t Even Bother to Lie
http://observer.com/2016/04/when-it-comes-to-age-bias-tech-companies-dont-even-bother-to-lie/
Making hiring and firing decisions based on age is illegal, but age discrimination is rampant in the tech industry, and everyone knows it, and everyone seems to accept it.
One excuse for pushing out older workers is that technology changes so fast that older people simply can’t keep up. Veteran coders don’t know the latest programming languages, but young ones do. This is bunk. There’s no reason why a 50-year-old engineer can’t learn a new programming language. And frankly, most coding work isn’t rocket science.
What’s more, most jobs in tech companies don’t actually involve technology. During my time at HubSpot fewer than 100 of the company’s 500 employees were software developers. The vast majority worked in marketing, sales, and customer support. Those jobs don’t require any special degree or extensive training. Anyone, at any age, could do them.
Twenty years ago, when venture capitalists invested in young founders, they usually insisted that founders team up with older, seasoned executives to provide “adult supervision.” Lately the conventional wisdom has been that it’s better to let young founders go it alone. The consequences have been predictably disastrous. Young male founders hire young male employees, and spend huge money building kooky office frat houses.
In the tech industry the practice of bros hiring bros is known as “culture fit,” and it’s presented as a good thing. The problem with “culture fit” is that unless you’re a twenty-something white person, you don’t fit.
Tomi Engdahl says:
Callum Leslie / The Daily Dot:
PwC: eSports industry will make $463M in revenue in 2016, up 43% from 2015 — Esports will earn a half billion in revenue this year according to PwC report — Esports will earn $500 million in revenue this year, according to a new report by PwC, the latest major financial services firm …
Esports will earn a half billion in revenue this year according to PwC report
http://www.dailydot.com/esports/pwc-predict-massive-year-esports/
Esports will earn $500 million in revenue this year, according to a new report by PwC, the latest major financial services firm to report on the potential of the esports industry.
Predicting the future of the esports industry has become something of a cottage industry all on its own. PwC’s report follows similar research by fellow market leaders Deloitte and smaller firms like Newzoo.
According to PwC, better known as PricewaterhouseCoopers, esports is set to bring in $463 million in revenue this year, a 43 percent increase on 2015. That’s slightly lower than the $500 million estimate from Deloitte back in January.
PwC also conducted a major survey of the esports audience, with surprising results.
According to PwC, 57 percent of the esports audience describe themselves as “hardcore gamers” and over a third of them fall into the coveted 18-24 demographic. But PwC found that women were more likely to describe themselves as involved in esports than men—22 percent of them, compared to 18 percent of men surveyed.
Tomi Engdahl says:
Computers That Crush Humans at Games Might Have Met Their Match: ‘StarCraft’
Artificial intelligence has conquered complex games, but to win this one, machines need to figure out how to lie
Humanity has fallen to artificial intelligence in checkers, chess, and, last month, Go, the complex ancient Chinese board game.
But some of the world’s biggest nerds are confident that machines will meet their Waterloo on the pixelated battlefields of the computer strategy game StarCraft.
A key reason: Unlike machines, humans are good at lying.
StarCraft, created in 1998, is one of the world’s most popular computer game franchises.
Tomi Engdahl says:
Microsoft, Google bury hatchet – surprisingly, not in each other
Both vow to stop running to mommy and daddy to tittle-tattle on each other
http://www.theregister.co.uk/2016/04/22/microsoft_google_bury_hatchet/
Microsoft and Google have agreed to sort their issues out between themselves rather than getting state regulators to investigate each other’s actions.
“Microsoft has agreed to withdraw its regulatory complaints against Google, reflecting our changing legal priorities,” Redmond said in a statement to El Reg. “We will continue to focus on competing vigorously for business and for customers.”
For years, the two companies have been using proxy forces, and direct attacks, to encourage government regulators to investigate each other. Google has been questioning the amount of intellectual property Microsoft holds, while Microsoft has been particularly active in Europe getting the EU to investigate the Chocolate Factory’s search and Android operations.
“Our companies compete vigorously, but we want to do so on the merits of our products, not in legal proceedings,” Google told The Reg in a statement.
Tomi Engdahl says:
Flexi-Plexistor’s software-defined memory roadmap
Storage-class memory emerging in software
http://www.theregister.co.uk/2016/04/25/flexiplexistor_softwaredefined_memory_roadmap/
Startup Plexistor’s SDM software is said to run any application at near-memory speed by using caching and tiering. It has a file system that covers DRAM, NVDIMM-N (byte-addressable flash DIMMs fully mapped to memory space and accessed at cache-line granularity), NVDIMM-F (block-addressable flash DIMM on memory bus), forthcoming XPoint, and SSDs.
The open source software runs on Linux (Red Hat, CentOS and Ubuntu) and converges memory, meaning DRAM or NVDIMM-N, and flash storage according to Plexistor, providing a single, unified memory address space. DRAM and NVDIMM-N provide a tier 1 with SSD/PCI flash providing a tier 2, that is still seen as memory by applications like MongoDB, Cassandra and Couchbase.
The second tier is limited to 12.5 times the capacity of the first tier. In tier 2 Plexistor says, “NVMe devices are preferred, but aggregating several SSDs via Linux LVM or using an AFA LUN are also valid options.”
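The tiering behaviour described, a small fast tier in front of a much larger slow tier with a capped capacity ratio, can be sketched as follows. This is a generic illustration of the technique, not Plexistor's implementation; only the 12.5x ratio is taken from the article:

```python
from collections import OrderedDict

class TwoTierStore:
    """Small fast tier (think DRAM/NVDIMM) in front of a big slow tier (SSD).
    Recently used items stay in tier 1; the coldest are demoted to tier 2."""

    def __init__(self, tier1_capacity, ratio=12.5):
        self.tier1 = OrderedDict()   # fast, kept in LRU order
        self.tier2 = {}              # slow, capacity-capped
        self.tier1_capacity = tier1_capacity
        self.tier2_capacity = int(tier1_capacity * ratio)

    def put(self, key, value):
        self.tier1[key] = value
        self.tier1.move_to_end(key)
        while len(self.tier1) > self.tier1_capacity:
            old_key, old_val = self.tier1.popitem(last=False)  # coldest item
            if len(self.tier2) >= self.tier2_capacity:
                raise MemoryError("both tiers full")
            self.tier2[old_key] = old_val  # demote to the slow tier

    def get(self, key):
        if key in self.tier1:
            self.tier1.move_to_end(key)
            return self.tier1[key]
        value = self.tier2.pop(key)   # promote back to tier 1 on access
        self.put(key, value)
        return value

store = TwoTierStore(tier1_capacity=2)
for i in range(5):
    store.put(f"k{i}", i)
```

The point applications see is the one the article makes: both tiers sit behind one interface, so callers address everything as if it were memory.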
Tomi Engdahl says:
Browser suffers from JavaScript-creep disease
http://www.edn.com/electronics-blogs/brians-brain/4441879/Browser-suffers-from-JavaScript-creep-disease?_mc=NL_EDN_EDT_EDN_today_20160425&cid=NL_EDN_EDT_EDN_today_20160425&elqTrackId=706b921cc45449d8b3e527e4909e9fe0&elq=a6c0b851ea9b40bf83ec6db4866847bb&elqaid=31982&elqat=1&elqCampaignId=27899
As time has gone on, my browsing experience on Firefox has gotten slower and slower, even though my broadband connection has gotten faster and faster. Rightly or wrongly, the browser has developed something of a bloatware reputation, due both to evolution of the foundation software package and its plethora of extensions (whose availability is ironically at the core of why it’s my preferred browser in the first place).
In attempting to deal with the issue, I first trimmed down the number of extensions I had enabled to the bare-bones minimum, with little to no noticeable effect, then gritted my teeth and vowed to stick it out. But the situation recently reached the realm of the ridiculous; sites like Amazon, Ebay, the Weather Channel, and Wired would slow my system to a crawl, as would more than one or two simultaneous tabs’ worth of comics published at Arcamax, GoComics, and elsewhere (I … umm … scan 26 online comics every morning …).
So I decided to research the situation further, beginning with a specific investigation of slowdowns involving Amazon’s website. The culprit, as it turned out, was JavaScript, which Wikipedia claims is “one of the three essential technologies of World Wide Web content production,” along with HTML and CSS. Installing a blacklist extension called YesScript and blocking scripts sourced from the images-amazon.com domain provided at least some relief (at the tradeoff of some reduced functionality). But this measure only assisted with one particular website; plenty of other domains I regularly visited were also experiencing slowdowns.
A sledgehammer, versus a scalpel, was what I decided I needed. I found my tool in the well-known NoScript extension, recommended by (among others) Edward Snowden. NoScript’s primary intention is to bolster user security; as such, it allows some trusted sources’ JavaScript, Java, Flash, and other applets to run by default.
Newspapers, as I recently noted, are increasingly desperate for revenue from anywhere. No surprise, therefore, that the Denver Post serves up 113 scripts by default.
Part of the problem, in Web developers’ slim defense, seems to be with Firefox’s SpiderMonkey JavaScript engine; I don’t notice the same CPU loading when I load a script-burdened page in Google’s Chrome (V8), for example, or Apple’s own Safari (JavaScriptCore, aka Nitro). But the bulk of the problem involves yet another manifestation of the “Tragedy of the Commons” phenomenon that I’ve used before to describe, for example, wireless communications network overloads. Quoting Wikipedia, it’s:
A situation where individuals acting independently and rationally according to each other’s self-interest behave contrary to the best interests of the whole group by depleting some common resource.
Typically, that resource is presumed to be plentiful, low-to-no cost, and nearly-to-completely unregulated. In this particular case, it’s the CPU (along with, to some extent, the GPU). Each JavaScript instance presumes it has exclusive access to as much of the processor’s horsepower as it needs, ignoring the reality of the concurrent presence of other contending scripts. And each Web developer presumes that its site has exclusive access to the browser, ignoring the reality of the concurrent presence of other contending pages loaded in other browser tabs and windows (not to mention the concurrent presence of other contending applications besides the browser).
Is it any wonder that ad blockers and their ilk have become so popular of late? Unfortunately, NoScript and other brute-force JavaScript-disable schemes aren’t palatable for the masses; while my experience indicates that they’re highly effective, they too-severely “break” websites in the process.
YesScript
https://addons.mozilla.org/en-US/firefox/addon/yesscript/
YesScript lets you make a blacklist of sites that aren’t allowed to run JavaScript. Use YesScript on sites that annoy you or hog your system resources. One click to the icon in the status bar turns scripts on or off for the current site.
Unlike NoScript, YesScript does absolutely nothing to improve your security.
NoScript
https://noscript.net/
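The difference between the two extensions is the default policy: YesScript blocks scripts only on listed sites (a blacklist), while NoScript blocks scripts everywhere except listed sites (a whitelist). Schematically, with invented site names:

```python
def scripts_allowed(site, listed_sites, mode):
    """Blacklist mode: run scripts unless the site is listed.
    Whitelist mode: run scripts only if the site is listed."""
    if mode == "blacklist":
        return site not in listed_sites
    if mode == "whitelist":
        return site in listed_sites
    raise ValueError(f"unknown mode: {mode}")

# YesScript-style: everything runs except known resource hogs.
yes = scripts_allowed("example.com", {"images-amazon.com"}, "blacklist")
# NoScript-style: nothing runs unless explicitly trusted.
no = scripts_allowed("example.com", {"trusted.example"}, "whitelist")
```

That default is why the article calls NoScript a sledgehammer: an unlisted site gets no scripts at all, which "breaks" pages, whereas the blacklist approach leaves unknown sites alone.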
Tomi Engdahl says:
The Death of RoboVM
http://www.linuxjournal.com/content/death-robovm
Microsoft recently made a big noise about its love and support of the Open Source community (especially Linux), but while it’s making concrete steps toward improving its support for FOSS projects, its motives may not be entirely altruistic. Microsoft continues to fund legal attacks against open-source projects on multiple fronts, and it has crushed open-source projects when it suits the company.
Such is the case with RoboVM, a Java-to-mobile compiler that supported cross-platform mobile development.
RoboVM originally was an open-source project, although that changed after the parent company was acquired by Xamarin in October 2015. Xamarin had several similar products that support cross-platform development using different programming languages. Naturally, Xamarin saw RoboVM as a suitable addition to its stable.
Shortly after the acquisition, an announcement was made to the effect that the open-source development model “wasn’t working out” for the RoboVM team.
Last week, the RoboVM team announced that the project would be shut down.
This is bad news for projects that depend on RoboVM.
Although several other open-source projects are in development that bring Java apps to iOS, they still are far from being production-ready.
So, why did Microsoft decide to axe a useful tool with a thriving user base? The official line is that the company decided it didn’t fit in with its vision for mobile development. The existence of several similar open-source projects may have been a factor—why invest in building a platform with strong competition on the horizon?
But, there are some who will say that Microsoft just doesn’t like Java.
Tomi Engdahl says:
Intel helps Redmond ingest Objective-C code
Accelerate framework added to Windows Bridge for iOS
http://www.theregister.co.uk/2016/04/26/intel_helps_redmond_ingest_objective_c_code/
Intel has dropped a slab of code into Microsoft’s Windows Bridge for iOS project, starting with APIs for vector maths, matrix maths, digital signal processing (DSP) and image processing.
According to an announcement posted at Microsoft, Intel wants to make sure that developers working in Objective-C can run their code on Intel-based Windows 10 devices as easily as possible.
Still described as a “preview” at Github, the Windows Bridge for iOS’s first release was nine months ago. The Microsoft open source project is an Objective-C development environment for Visual Studio with support for iOS APIs.
Intel’s first contribution to the project, the Accelerate framework (here for Apple developers), targets scientific computing requirements, including audio and image filters.
Tomi Engdahl says:
Charlie Warzel / BuzzFeed:
Unicode Consortium internal emails reveal emerging tension over resources going toward development of emoji instead of supporting obscure or minority languages
Inside “Emojigeddon”: The Fight Over The Future Of The Unicode Consortium
http://www.buzzfeed.com/charliewarzel/inside-emojigeddon-the-fight-over-the-future-of-the-unicode#.ujOPL9jxk
Internal emails offer a peek behind the scenes of the peculiar and little-known organization that oversees the development of a weird, new universal language
There’s trouble afoot inside the Emoji Council of Elders, or, at the very least, signs of a low-simmering schism that’s being referred to by some of its participants — perhaps with less humor than one might expect — as “Emojigeddon.”
Emails seen by BuzzFeed News reveal an emerging tension at the Unicode Consortium — the 24-year-old organization that was established to develop standards for translating alphabets into code that can be read across all computers and operating systems.
“And yes, obviously a burrito emoji will be more in use than medieval punctuation.”
Tomi Engdahl says:
Mark Hachman / PCWorld:
Intel CEO outlines new strategy: focus on connected things like PC and IoT, cloud, new memory business like 3D XPoint, 5G, manufacturing and fab innovation
http://www.pcworld.com/article/3061210/components/intel-declares-independence-from-the-pc-as-it-lays-out-a-broader-5-point-strategy.html
Intel declares independence from the PC as it lays out a broader 5-point strategy
The PC’s just another ‘connected thing’ in this new world order.
In what only can be called a manifesto of Intel’s new values, Krzanich described how Intel is transforming itself “from a PC company to a company that powers the cloud and billions of smart, connected computing devices.” To drive the point home, Krzanich noted that the PC is just one among many connected devices.
What might be called the “new” Intel will be built upon five pillars, Krzanich said:
The cloud—including servers, data centers, and virtualization
Connected “things,” such as sensors, autonomous vehicles, or PCs
An evolving memory business, from 3D XPoint memory to advances in server and data center infrastructure
Connectivity, specifically 5G networking
Manufacturing and the underlying fab technology.
About 40 percent of Intel’s revenue and 60 percent of its profit margin already come from outside the PC, Krzanich said last week, when the company began publicly signalling its new focus.
Tomi Engdahl says:
Brian Krzanich: Our Strategy and The Future of Intel
https://newsroom.intel.com/editorials/brian-krzanich-our-strategy-and-the-future-of-intel/
There are five core beliefs that I hold to be undeniably true for the future.
The cloud is the most important trend shaping the future of the smart, connected world – and thus Intel’s future.
The many “things” that make up the PC Client business and the Internet of Things are made much more valuable by their connection to the cloud.
Memory and programmable solutions such as FPGAs will deliver entirely new classes of products for the data center and the Internet of Things.
5G will become the key technology for access to the cloud and as we move toward an always-connected world.
Moore’s Law will continue to progress and Intel will continue to lead in delivering its true economic impact.
Tomi Engdahl says:
Jay Donovan / TechCrunch:
Movidius announces Fathom Neural Compute Stick, a neural network compute accelerator on a USB stick
Plug the Fathom Neural Compute Stick into any USB device to make it smarter
http://techcrunch.com/2016/04/28/plug-the-fathom-neural-compute-stick-into-any-usb-device-to-make-it-smarter/
Following on the heels of their announcement a few weeks ago about their FLIR partnership, Movidius is making another pretty significant announcement regarding their Myriad 2 processor. They’ve incorporated it into a new USB device called the Fathom Neural Compute Stick.
You can plug the Fathom into any USB-capable device (computer, camera, GoPro, Raspberry Pi, Arduino, etc) and that device can become “smarter” in the sense that it can utilize the Myriad 2 processor inside of it to become an input for a neural network (I’ll come back to all this).
Essentially, it means a device with the Fathom plugged into it can react cognitively or intelligently, based on the things it sees with its camera (via computer vision) or data it processes from another source. A device using it can make its own decisions depending on its programming. The key point is it can do this all natively—right on the stick. No call to the cloud is necessary.
In addition to the stick, Movidius has also created a software system they are calling the Fathom Deep Learning Software Framework that lets you optimize and compile learning algorithms into a binary that will run on the Myriad 2 at extremely low power. In a computer vision scenario, Movidius claims it can process 16 images per second using a single watt of power at full-bore/peak performance.
A larger rollout is planned for Q4 that is targeting the sub $100 range for the device.
The Complicated Part: What’s All This Business About Neural Networks And Algorithms?
Say you want to teach a computer system to recognize images or parts of images and react to them very quickly. For example, you want to program a drone camera to be able to recognize landing surfaces that are flat and solid versus those that are unstable.
To do this, you might build a computer system, with many, many GPUs and then use an open source software library like TensorFlow on that system to make the computer a learning system—an Artificial Neural Network. Once you have this system in place, you might begin feeding tens or even hundreds of thousands of images of acceptable landing surfaces into that learning system: flat surfaces, ship decks, driveways, mountaintops…anywhere a drone might need to land.
Over time, this large computer system begins learning and creating an algorithm, to the point where it can begin to anticipate answers on its own, very quickly.
How the Fathom Neural Compute Stick figures into this is that the algorithmic computing power of the learning system can be optimized and output (using the Fathom software framework) into a binary that can run on the Fathom stick itself.
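What actually runs on the stick is the cheap half of this pipeline: the forward pass of the already-trained network. A toy forward pass in pure Python, just to show the shape of on-device inference (the weights, sizes, and class labels are made up; a real deployed network would be compiled by the Fathom framework, not hand-written like this):

```python
import math

def forward(pixels, weights, biases):
    """One dense layer + sigmoid: the inference step deployed on-device.
    Training produced weights/biases elsewhere, on the big GPU system."""
    scores = []
    for w_row, b in zip(weights, biases):
        z = sum(p * w for p, w in zip(pixels, w_row)) + b
        scores.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid activation
    return scores

# Tiny made-up model: 4 "pixel" inputs -> 2 outputs ("flat", "unstable").
weights = [[0.9, 0.8, -0.2, 0.1], [-0.7, -0.6, 0.9, 0.4]]
biases = [-0.5, 0.2]
scores = forward([1.0, 1.0, 0.0, 0.0], weights, biases)
label = ["flat", "unstable"][scores.index(max(scores))]
```

Because inference is just multiply-accumulate loops like this, it can run at very low power on dedicated silicon, with no call to the cloud, which is the stick's whole premise.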
Tomi Engdahl says:
Dave Gershgorn / Popular Science:
Elon Musk and Sam Altman’s OpenAI releases Gym, a toolkit for developers to test reinforcement learning algorithms — Elon Musk’s Artificial Intelligence Group Opens A ‘Gym’ To Train A.I. — In any scientific arena, good research is able to be replicated.
Elon Musk’s Artificial Intelligence Group Opens A ‘Gym’ To Train A.I.
http://www.popsci.com.au/robots/artificial-intelligence/elon-musks-artificial-intelligence-group-opens-a-gym-to-train-ai,418717
In any scientific arena, good research is able to be replicated. If others can mimic your experiment and get the same results, that bodes well for the validity of the finding. And if others can tweak your study to get better results, that’s of even more benefit to the community.
These ideas are the driving force behind OpenAI Gym, a new platform for artificial intelligence research. OpenAI, announced earlier this year, is the brainchild of Elon Musk, Y Combinator’s Sam Altman, and former Googler Ilya Sutskever. The collaboration vows to undertake ambitious artificial intelligence (A.I.) research while publishing and open-sourcing nearly everything they do. The platform wants to be the standard for benchmarking certain kinds of A.I. algorithms, and a place for people to share their results.
OpenAI Gym Beta
https://gym.openai.com/?dom=pscau&src=syn
We provide the environment; you provide the algorithm.
You can write your agent using your existing numerical computation library, such as TensorFlow or Theano.
Open source interface to reinforcement learning tasks.
The gym open-source project provides a simple interface to a growing collection of reinforcement learning tasks. You can use it from Python, and soon from other languages.
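The interface Gym standardizes is small: an environment exposes reset() and step(action), and the agent loop looks the same for every task. A minimal sketch of that loop with a toy environment (this mimics the shape of the Gym API rather than importing the actual library; the environment is invented):

```python
class ToyEnv:
    """Walk a number line from 0 to a goal; mimics the reset/step interface."""

    def __init__(self, goal=3):
        self.goal = goal

    def reset(self):
        self.position = 0
        return self.position  # initial observation

    def step(self, action):
        self.position += 1 if action == "right" else -1
        done = self.position >= self.goal
        reward = 1.0 if done else 0.0
        return self.position, reward, done, {}  # obs, reward, done, info

env = ToyEnv()
obs, total_reward, done = env.reset(), 0.0, False
while not done:                # the standard agent loop, identical for any env
    action = "right"           # a fixed policy stands in for a learned one
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Standardizing on this loop is what makes results comparable and reproducible across labs, which is the platform's stated goal.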
Tomi Engdahl says:
Getting started with DevOps: A guide for IT managers
Puppet white paper distils transformation insights and tips
http://www.theregister.co.uk/2016/04/29/getting_started_with_devops_a_guide_for_it_managers/
The paper, Getting Started with DevOps: A Guide for IT Managers, is based on the experiences of Gareth Rushgrove, senior software engineer at Puppet and former technical architect for UK’s Government Digital Service, responsible for building GOV.UK.
The paper explores approaches an IT manager should take to make a DevOps initiative successful. DevOps practices will serve your team well, says Gareth, if you want the ability to deliver better software faster, scale quickly and easily, avoid the pitfalls of shadow IT, and enjoy better relationships with colleagues across the organisation.
Importantly, DevOps removes technical and functional roadblocks between Dev and Ops, encouraging new modes of thinking and more collaborative practices.
Tomi Engdahl says:
Gamers will never win with touch
Serious players do not use touch screens, because playing on them is clumsy and slow. Now Aalto University researchers have discovered why this is so, and the conclusion is clear: a touch screen will never beat traditional physical buttons.
Traditionally, the clumsiness of touch screens has been blamed on the lack of a physical keyboard, but a new theory from the Aalto University research group suggests that the culprit may be something else.
- The lack of a traditional keypad is not the critical thing, because a touch screen also gives the user feedback through the sense of touch. Nor are touch screens any slower than traditional keys, explains Aalto University researcher Byungjoo Lee.
On the basis of their experiments, the scientists developed a theory that there are three reasons why responding on a touch screen is difficult. The first is that keeping the fingers at a suitable, uniform distance from the display is difficult, which makes a fast and accurate response harder than on a traditional keyboard, where the fingers rest in the right places.
Another challenge is that it is hard for the human nervous system to predict when the device will register the finger’s touch. The third reason complicating gaming is the variation in the time it takes the application to process a button press.
The user’s timing is most accurate when the touch is registered at the moment the finger’s contact area is at its greatest.
- We can finally explain why playing such games on a touch screen is so frustrating. Our model can predict how many points a player can achieve on a touch screen.
Source: http://etn.fi/index.php?option=com_content&view=article&id=4333:pelikaytossa-kosketus-ei-koskaan-voita&catid=13&Itemid=101
More:
Modelling Error Rates in Temporal Pointing
http://users.comnet.aalto.fi/oulasvir/pubs/modeling-errors-chi2016.pdf
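The three noise sources the researchers identify lend themselves to a back-of-the-envelope timing model (a hypothetical illustration of the idea only, not the formulation in the linked CHI 2016 paper): treat the registered input time as the intended time plus independent zero-mean Gaussian noise from finger positioning, neural prediction, and application latency, and compute the chance of landing inside the game's timing window.

```python
import math

def success_probability(window_ms, sigma_finger, sigma_neural, sigma_latency):
    """Chance that an input lands inside a timing window of width window_ms,
    assuming registered time = intended time + zero-mean Gaussian noise whose
    variance sums the three independent sources described above.
    (Hypothetical sketch, not the published model.)"""
    sigma = math.sqrt(sigma_finger**2 + sigma_neural**2 + sigma_latency**2)
    half_width = window_ms / 2.0
    # P(|error| <= half_width) for a zero-mean normal, via the error function
    return math.erf(half_width / (sigma * math.sqrt(2.0)))

# Illustrative (made-up) noise magnitudes in milliseconds: touch screens add
# finger-distance and latency jitter that physical buttons largely remove,
# so the same player hits the same 100 ms window more often with buttons.
p_touch = success_probability(100, sigma_finger=30, sigma_neural=25, sigma_latency=20)
p_buttons = success_probability(100, sigma_finger=5, sigma_neural=25, sigma_latency=5)
```

Even with invented numbers, the shape of the result matches the article's claim: shrinking the positioning and latency terms raises the hit rate, which is why buttons keep winning in timing-critical games.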
Tomi Engdahl says:
Google CEO Predicts AI-Fueled Future
https://tech.slashdot.org/story/16/04/28/1759211/google-ceo-predicts-ai-fueled-future
Google CEO Sundar Pichai says the next big evolution for technology is AI. “Looking to the future, the next big step will be for the very concept of the ‘device’ to fade away,” Pichai wrote in Google’s annual founders’ letter.
https://googleblog.blogspot.fi/2016/04/this-years-founders-letter.html