Computer trends for 2014

Here is my collection of trends and predictions for the year 2014:

It seems that the PC market is not recovering in 2014. IDC is forecasting that the technology channel will buy around 34 million fewer PCs this year than last. It seems that things aren’t going to improve any time soon (down, down, down until 2017?). There will be no let-up on any front, with desktops and portables predicted to decline in both the mature and emerging markets. Perhaps the chief concern for future PC demand is a lack of reasons to replace an older system: PC usage has not moved significantly beyond consumption and productivity tasks to differentiate PCs from other devices. As a result, PC lifespans continue to increase. The Death of the Desktop article says that, sadly for the traditional desktop, it is only a matter of time before its purpose expires, and that this will inevitably happen within this decade. (I expect that it will not completely disappear.)

While the PC business slowly declines, the smartphone and tablet business will grow quickly. Some time in the next six months, the number of smartphones on earth will pass the number of PCs. This shouldn’t really surprise anyone: the mobile business is much bigger than the computer industry. There are now perhaps 3.5-4 billion mobile phones, replaced every two years, versus 1.7-1.8 billion PCs replaced every 5 years. Smartphones broke down the wall between those industries a few years ago – suddenly tech companies could sell into an industry with $1.2 trillion in annual revenue. Now you can sell more phones in a quarter than the PC industry sells in a year.
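
A quick back-of-the-envelope check of those replacement rates shows why. The sketch below simply divides the installed-base estimates quoted above by the replacement cycles; the midpoint figures are only illustrative.

```python
# Rough annual-sales estimates from installed base and replacement cycle.
# The installed-base figures are midpoints of the estimates quoted above.
phones_installed = 3.75e9    # ~3.5-4 billion mobile phones in use
phone_cycle_years = 2        # replaced roughly every two years

pcs_installed = 1.75e9       # ~1.7-1.8 billion PCs in use
pc_cycle_years = 5           # replaced roughly every five years

phones_per_year = phones_installed / phone_cycle_years   # ~1.9 billion/year
pcs_per_year = pcs_installed / pc_cycle_years             # ~350 million/year

print(f"Phones per quarter: {phones_per_year / 4 / 1e6:.0f} million")   # ~470 million
print(f"PCs per year:       {pcs_per_year / 1e6:.0f} million")          # ~350 million
# A single quarter of phone replacements already exceeds a full year of PC sales.
```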

Within a few years we will end up with somewhere over 3bn smartphones in use on earth, almost double the number of PCs. There are perhaps 900m consumer PCs on earth, and maybe 800m corporate PCs. The consumer PCs are mostly shared and the corporate PCs locked down, and neither are really mobile. Those 3 billion smartphones will all be personal, and all mobile. Mobile browsing is set to overtake traditional desktop browsing in 2015. The smartphone revolution is changing how consumers use the Internet, and this will influence web design.

The only PC sector that seems to have some growth is the server side. The Microservers & Cloud Computing to Drive Server Growth article says that increased demand for cloud computing and high-density microserver systems has brought the server market back from a state of decline. We’re seeing fairly significant change in the server market. According to the 2014 IC Market Drivers report, server unit shipment growth will increase in the next several years, thanks to purchases of new, cheaper microservers. The total server IC market is projected to rise by 3% in 2014 to $14.4 billion: the multicore MPU segment for microservers and NAND flash memories for solid-state drives are expected to see the best numbers.

The Spinning rust and tape are DEAD. The future’s flash, cache and cloud article says that flash is the tier for primary data, the stuff christened tier 0. Data that needs to be written out to a slower-response store goes across a local network link to a cloud storage gateway, which holds the tier 1 nearline data in its cache. The Never mind software-defined HYPE, 2014 will be the year of storage FRANKENPLIANCES article says that more hype around Software-Defined-Everything will keep the marketeers and the marchitecture specialists well employed for the next twelve months, but don’t expect anything radical. The only innovation is going to be around pricing and consumption models as vendors try to maintain margins. FCoE will continue to be a side-show and FC, like tape, will soldier on happily. NAS will continue to eat away at the block storage market, and perhaps 2014 will be the year that object storage finally takes off.

The IT managers are increasingly replacing servers with SaaS article says that cloud providers will take on a bigger share of servers as the overall market starts to decline. An in-house system is no longer the default for many companies. IT managers want to cut the number of servers they manage, or at least slow the growth, and they may be succeeding. IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers. In three years, by 2017, nearly 45% of all the servers leaving manufacturers will be bought by cloud providers. The shift will slow server sales to enterprise IT. Big cloud providers are increasingly using their own designs instead of servers from the big manufacturers. Data center consolidations are eliminating servers as well. IT managers will certainly be managing physical servers for years to come, but the number will be declining.

I hope that the IT business will start to grow this year as predicted. Information technology spending is set to increase next financial year, according to N Chandrasekaran, chief executive and managing director of Tata Consultancy Services (TCS), India’s largest information technology (IT) services company. IDC predicts that worldwide IT spending will increase 5 per cent next year, to $2.14 trillion. The biggest opportunity is expected to lie in the digital space: social, mobility, cloud and analytics. The gradual recovery of the economy in Europe will restore faith in business. Companies are re-imagining their businesses, keeping in mind changing digital trends.

The death of Windows XP will be in the news many times during the spring, and there will be companies trying to cash in on it: Microsoft’s plan to end Windows XP support next spring has prompted IT services providers as well as competitors to invest in marketing their own services. HP is peddling its Connected Backup 8.8 service to customers to prevent data loss during migration. VMware is selling a cloud desktop service. Google is wooing users to switch to Chrome OS by making Chrome’s user interface familiar to wider audiences. Perhaps the most direct attempt to exploit XP’s demise comes from Arkoon, a subsidiary of the European defense giant EADS, which promises support for XP users who do not want to or cannot upgrade their systems.

There will be talk on what will be coming from Microsoft next year. Microsoft is reportedly planning to launch a series of updates in 2015 that could see major revisions for the Windows, Xbox, and Windows RT platforms. Microsoft’s wave of spring 2015 updates to its various Windows-based platforms has a codename: Threshold. If all goes according to early plans, Threshold will include updates to all three OS platforms (Xbox One, Windows and Windows Phone).

Amateur programmers are becoming increasingly prevalent in the IT landscape. A new IDC study has found that of the 18.5 million software developers in the world, about 7.5 million (roughly 40 percent) are “hobbyist developers”, which is what IDC calls people who write code even though it is not their primary occupation. The boom in hobbyist programmers should cheer computer literacy advocates. IDC estimates there are almost 29 million ICT-skilled workers in the world as we enter 2014, including 11 million professional developers.

The challenge of cross-language interoperability will be talked about more and more. Interfacing between languages will be increasingly important, because you can no longer expect a nontrivial application to be written in a single language. With software becoming ever more complex and hardware less homogeneous, the likelihood of a single language being the correct tool for an entire program is lower than ever. The trend toward increased complexity in software shows no sign of abating, and modern hardware creates new challenges. Mobile phones are now starting to appear with eight cores sharing the same ISA (instruction set architecture) but running at different speeds, alongside streaming processors optimized for different workloads (DSPs, GPUs) and other specialized cores.
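
Foreign-function interfaces are the most common bridge between languages today. As a minimal sketch of the idea (not tied to any particular project mentioned above), the Python snippet below calls the C math library through ctypes; even locating the library differs per platform, which is a small taste of why interoperability stays hard.

```python
import ctypes
import ctypes.util

# Locate and load the C math library (libm); the name varies by platform,
# which is exactly the kind of friction cross-language work runs into.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature, double cos(double); without this ctypes assumes
# int arguments and return values and silently produces wrong answers.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))   # 1.0, computed by C code called from Python
```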

Just another new USB connector type will be pushed to the market. The Lightning strikes USB bosses: Next-gen ‘type C’ jacks will be reversible article says that USB is to get a new, smaller connector that, like Apple’s proprietary Lightning jack, will be reversible. Designed to support both USB 3.1 and USB 2.0, the new connector, dubbed “Type C”, will be the same size as an existing micro USB 2.0 plug.

2,130 Comments

  1. Tomi Engdahl says:

    Nokia N1: 8-inch Android Lollipop Tablet with iPad-Like Design and 64-bit Intel Processor
    http://techpp.com/2014/11/18/nokia-n1-tablet/

    We were telling you yesterday about a mysterious black box that Nokia had tweeted about. The official launch event has not yet started, but it seems that Nokia has already spilled the beans on its website! And, guess what – Nokia isn’t allowed to make phones in the near future, so they decided to introduce a tablet! And that too an Android tablet running on the latest Android 5.0 Lollipop!

    Nokia’s N1 tablet is powered by Android 5.0 Lollipop but also comes with Nokia Z Launcher, a home screen ‘that makes things simple’.

    Here are the tech specs of the device, as these have been made public:

    Display – 7.9 inch (4:3), 2048×1536 resolution, Gorilla glass 3 IPS panel with LED backlight and Fully laminated zero air-gap display
    Processor – Intel 64-bit Atom Processor Z3580, 2.3 GHz
    Memory and storage – 2 GB LPDDR3 at 800 MHz and 32 GB eMMC 5.0 internal storage
    Camera – 8 MP rear-facing with autofocus, 5 MP front-facing with fixed focus, 1080p video recording

    It’s a really surprising move from the company, especially considering the fact that this tablet is strikingly similar to the iPad.

    Nokia N1 is priced at $249 and will be made available in China to begin with.
    This is an extremely competitive pricing from Nokia considering the specs alone.

    Reply
  2. Tomi Engdahl says:

    Top 500 Supercomputers Sputter
    Growth in performance at historic lows
    http://www.eetimes.com/document.asp?doc_id=1324642&

    The latest list of the world’s top supercomputers shows a trend to slowing growth and little change — at least for the moment. The systems continue their adoption of accelerators to bolster parallelism and are starting to transition to 10 Gbit/s Ethernet as an interconnect.

    Intel continues to dominate the Top 500 with an 85.8% share. IBM’s Power chip comes in a distant second with 8%, and AMD is third at 5.2%. Ethernet continues to be the favorite interconnect for the clusters, with 187 systems using Gigabit and now 88 using 10G Ethernet links. InfiniBand is used in 225 systems, up from 221 six months ago.

    Reply
  3. Tomi Engdahl says:

    Nokia’s first device after Microsoft is an iPad mini clone that runs Android
    This is the Nokia N1
    http://www.theverge.com/2014/11/18/7239709/nokia-n1-tablet-price-release-date

    Nokia is back in the devices business just under seven months after selling its devices and services unit to Microsoft for $7.2 billion. Nokia is unveiling its N1 Android tablet today, days after revealing its plans to license its brand name and teasing a black box on Twitter. Just like Xiaomi’s attempts to emulate Apple’s iPad mini design, Nokia’s N1 has the same 7.9-inch screen size and even the same 2048 x 1536 resolution. Nokia has even opted for a single piece of anodized aluminum design. The resemblances don’t stop there, though.

    Nokia’s N1 is almost identical to the rear of the iPad mini thanks to careful placement of the camera, buttons, and headphone jack. Even the bottom of the device has the same speaker grills and what looks like a Lightning port, but is actually one of the first implementations of the reversible type-C USB connector.

    Nokia’s return to hardware begins with the $250 N1 Android tablet
    http://www.engadget.com/2014/11/18/nokia-n1-tablet-249/

    Rumors of Nokia’s demise have been greatly exaggerated. Its lineup might seem empty now that it’s relinquished control of its Lumia smartphones to a lumbering giant and gave up on those low-cost Asha devices earlier this year, but that doesn’t mean the company’s done crafting consumer gadgets just yet. Now Nokia’s trying to revive its once-titanic consumer brand, starting with something a little… unorthodox. Meet the Nokia N1: a 7.9-inch Android tablet running some Nokia software that looks like a giant iPhone. It’ll cost you $250 when it launches, but it’s slated to land in China first in time for Chinese New Year (that’s February 19, 2015) with a release in Russia to follow soon after.

    The last time Nokia ventured into tablet territory it churned out the Lumia 2520, a respectable (if unrepentantly plasticky) slate as far as Windows RT devices went. With the Wi-Fi-only N1, Nokia ran clear in the opposite direction — its rounded chassis is milled from a single block of aluminum, and is highly, highly reminiscent of an iPad Mini.

    The N1 runs Android 5.0 out of the box, but Nokia has painted over Lollipop with its own homebrew Z Launcher — a little project the company has grown strangely fond of.

    Reply
  4. Tomi Engdahl says:

    Non-Microsoft Nokia launches Android N1 tablet with Foxconn
    https://gigaom.com/2014/11/18/non-microsoft-nokia-launches-android-n1-tablet-and-z-launcher/

    Nokia — not the handset business that Microsoft bought and renamed Microsoft Mobile, but the remaining Finnish firm — has made a shock announcement. It’s launched an Android tablet called the N1.

    This will be confusing, because Microsoft is also selling a Nokia-branded Windows tablet, the Nokia Lumia 2520. It’s a shock, because while Nokia has recently made rumblings about its brand returning to the consumer market, it seemed to indicate that this would merely be a matter of licensing the brand to others.

    Reply
  5. Tomi Engdahl says:

    Microsoft is considering open sourcing parts of Bing backend tech
    http://www.neowin.net/news/microsoft-is-considering-open-sourcing-parts-of-bing-backend-tech

    ‘Open source’ and ‘Microsoft’ used to be like oil and water; the two would never be mixed. But today, Microsoft has shown that it is highly supportive of this community by open sourcing parts of its key products and allowing these platforms to run on its Azure infrastructure.

    Last week, Microsoft announced that they would be open sourcing .NET and allowing it to run on OS X and Linux as well – a strategic and widely welcomed move by the Redmond based company. But what’s next for the company’s open source plans? Well, the current conversations are talking about Bing and seeing what parts of that system could benefit by being open sourced.

    While it seems crazy to open up your search technology, if you think about it, Microsoft doesn’t really have much to lose.

    From Microsoft’s position, Bing search is not a core revenue driver like Windows or Office, it’s a supplementary product that ties many platforms together much like OneDrive. Yes, the engine technically does make money now, but it’s not in the same league as many other revenue drivers for the company.

    Reply
  6. Tomi Engdahl says:

    Virtualization management is becoming a key issue in the journey to a more
    cost-effective, efficient and agile infrastructure. The focus is now
    turning away from how much you invest, to how intelligently you make
    those investments.

    Reply
  7. Tomi Engdahl says:

    The Role of QA
    http://www.eetimes.com/author.asp?section_id=182&doc_id=1324654&

    Delivering high quality doesn’t mean leaning on QA to find your errors. Quality assurance is not supposed to find bugs.

    However, almost every software group conflates QA and QC, generally folding both operations into the single term QA. I have no reason to tilt against windmills so will use the term “QA.” Software engineering isn’t manufacturing; we don’t need to adopt their nomenclature.

    I believe that developers have the responsibility to deliver extremely high-quality code. Tiny teams may deliver directly to the customer, while larger groups have a separate QA operation. Regardless, we engineers must create the very best work products.

    QA’s role is to demonstrate the absence of defects. Sure, life is tough and software complex. Sometimes they will find problems. But that doesn’t absolve the engineers of their responsibility to strive for perfection. We engineers must take pride in our work, demonstrate exceptional craftsmanship, and constantly hone our tools, processes, and techniques to achieve the highest quality

    Reply
  8. Tomi Engdahl says:

    Nvidia doubles Tesla grunt at SC14
    2.9 teraflops, 4,992 cores: here comes the K80
    http://www.theregister.co.uk/2014/11/18/nvidia_doubles_tesla_grunt_at_sc14/

    Nvidia’s SC14 eye-catcher is the next increment in its HPC GPU accelerator, the Tesla K80.

    The successor to the Tesla K40, the K80 is pitched as double-your-everything: twice the performance and twice the memory bandwidth. Unsurprisingly, the company reckons its target will be data analytics and scientific computing applications (which apart from Bitcoin mining are the plum targets for GPU-accelerated HPC).

    The K80 claims 8.74 teraflops single-precision and 2.91 teraflops double-precision. Each board has two GPUs, 24 GB of GDDR5 memory (12 GB for each GPU), and 480 gigabytes/second memory bandwidth. There are 4,992 CUDA cores, and NVIDIA’s GPU booster (overclocking for boffins) and dynamic parallelism support.
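
    For anyone checking those headline figures, peak throughput is usually worked out as cores × 2 floating-point operations per clock (one fused multiply-add) × clock rate. The sketch below reproduces the quoted numbers; note that the ~875 MHz boost clock and the 1:3 double-to-single-precision ratio are my assumptions, not figures from the article.

    ```python
    # Back-of-the-envelope peak-throughput check for the Tesla K80 (two GPUs per board).
    cuda_cores = 4992              # total across both GPUs, as quoted above
    boost_clock_hz = 875e6         # assumed GPU Boost clock (~875 MHz), not from the article
    flops_per_core_per_clock = 2   # one fused multiply-add counts as 2 floating-point ops

    single_precision = cuda_cores * flops_per_core_per_clock * boost_clock_hz
    double_precision = single_precision / 3   # assumed 1:3 DP:SP ratio for this part

    print(f"Single precision: {single_precision / 1e12:.2f} TFLOPS")   # ~8.74
    print(f"Double precision: {double_precision / 1e12:.2f} TFLOPS")   # ~2.91
    ```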

    Nvidia is one of the companies to pitch its wares into the Department of Energy’s 2017-scheduled Summit and Sierra monsters, which will scorch along at 100 petaflops and as much as 300 petaflops respectively.

    Reply
  9. Tomi Engdahl says:

    Machine-Learning Algorithm Ranks the World’s Most Notable Authors
    http://www.technologyreview.com/view/532591/machine-learning-algorithm-ranks-the-worlds-most-notable-authors/

    Deciding which books to digitise when they enter the public domain is tricky, unless you have an independent ranking of the most notable authors.

    Reply
  10. Tomi Engdahl says:

    Scale out sister: Open sorcerer pulls v3 Gluster cluster out of Red Hat
    Linux pusher intros Storage Server 3
    http://www.theregister.co.uk/2014/10/03/red_hats_roseate_gluster_gig/

    Open sorcery evangelist Red Hat has updated its Gluster clustering Red Hat Storage Server to v3, adding capacity and cluster nodes.

    RHSS v 3.0 is based on the GlusterFS 3.6 file system with RHEL 6.0. It is designed for scale-out file storage and is built from, of course, open source software.

    A datasheet (PDF) says: “Red Hat Storage Server can easily be deployed on-premise, in public cloud infrastructures, and in hybrid cloud environments. It is optimised for storage-intensive enterprise workloads such as archiving and backup, rich media content delivery, enterprise drop-box, cloud and business applications, virtual and cloud infrastructure storage, as well as emerging workloads such as co-resident applications and big data Hadoop workloads.”

    Reply
  11. Tomi Engdahl says:

    Multi-petabyte open sorcery: Spell-binding storage
    Not just for academics anymore
    http://www.theregister.co.uk/2014/11/18/multipetabyte_open_sorcery/

    Mixing petabytes of data and open-source storage used to be the realm of cash-strapped academic boffins who didn’t mind mucking in with software wizardry.

    The need to analyse millions, billions even, of records of business events stored as unstructured information in multi-petabyte class arrays makes ordinary storage seem like child’s play. It also dramatically increases storage costs, so much so that the software needed to manage and access the data becomes hugely expensive too.

    This is a whole new ball game so you need to be clear about what technologies you can use, from blindingly fast integrated hardware/software systems at one end of the spectrum to cheap JBODs (just a bunch of disks) and free software at the other.

    At first glance the options we have are object storage, with compute (controller) resources per node, a clustered storage array setup like NetApp’s Clustered ONTAP, or a modified file system that can provide multiple access lanes.

    Six basic alternatives come to mind:

    Parallel NFS
    IBM Elastic Storage
    DDN and Web Object Scaler (WOS)
    Seagate/Xyratex and Lustre
    Red Hat and Gluster
    Ceph

    DIY or a contracted alternative

    Ceph, Lustre and Gluster all free you from software lock-in and let you move off a hardware platform, since they use commodity components such as X86 servers, but still provide the one-throat-to-choke full service support that will suit customers not wishing or able to do everything themselves.

    Lustre and Gluster have their file access strengths plus the object back-end advantages of being scalable and self-healing. Ceph has those advantages too plus scale to the exabyte level, and it provides file, block and object access.

    To get the benefits of open source you have to bind yourself to a single storage software choice. After that you can decide on the full do-it-all-yourself approach or pay a supplier to hold your hand and provide the same kind of deployment and support as you would get with proprietary hardware and software such as IBM’s Elastic Storage or DDN’s WOS.

    Reply
  12. Tomi Engdahl says:

    Brocade notes that enterprises today face significant challenges from the explosive growth of data and application workload traffic driven by virtualization, which is putting considerable pressure on IT to keep data highly available. According to IDC, disaster recovery requirements have become increasingly stringent for their mission-critical applications, with 84 percent of enterprises having RPOs of less than an hour and 78 percent having RTOs of less than four hours.

    “With stored data expected to grow at a 44 percent CAGR over the next five years, enterprises are demanding that their recovery solutions meet ever higher standards in keeping data available amid a wide variety of increasingly frequent natural and man-made disasters,” said Eric Burgener, research director for storage at IDC. “Furthermore, end users are expecting to access their data on a 7×24 basis, forcing enterprises to meet their demands for ‘always on’ application services.”

    Source: http://www.cablinginstall.com/articles/2014/11/brocade-datacenter-extension.html

    Reply
  13. Tomi Engdahl says:

    If It Ain’t Automated, You’re Doing It Wrong
    http://www.thenewip.net/author.asp?section_id=289&doc_id=710937&cid=oubtrain&wc=4

    In all the excitement over virtualization and the impact that NFV and SDN will have on telecom networks, one stark reality remains for every IP network operator: However you are evolving your network, if you aren’t automating the back-end processes, you’re doing it wrong.

    This has been a reality for telecom network operators for years now, and most have been working very hard at this task, not only because automation leads to higher service quality and faster service delivery but because it also generally means lower costs of operation.

    As those engaged in this process know all too well, introducing automation means extracting people, reducing the human error factor in the process, and enabling flow-through processes that start with the customer input.

    Introducing software-defined networks and network functions virtualization into the telecom world will enable a much greater degree of network programmability, with centralized control over network resources that allows them to be targeted and re-used in the way that meets customer demand in the most efficient way possible. Moving services to the cloud model then makes it possible to meet the on-demand needs of many enterprise customers.

    Reply
  14. Tomi Engdahl says:

    Intel’s Xeon Phi: After Knights Landing Comes Knights Hill
    by Ryan Smith on November 18, 2014 10:00 AM EST
    http://www.anandtech.com/show/8732/intels-xeon-phi-after-knights-landing-comes-knights-hill

    As SC’14 rolls on this week, taking part in the show’s events is Intel, who was at the show to deliver an update on the Xeon Phi lineup. As Intel already delivered a sizable update on Xeon Phi at ISC 2014 earlier this year, their SC’14 announcement is lighter fare, but we now know the name of the next generation of Xeon Phi.

    First and foremost, Intel has reiterated that the forthcoming Knights Landing generation of Xeon Phi is still on schedule for H2’15. Built on Intel’s 14nm process, Knights Landing should be a substantial upgrade to the Xeon Phi family by virtue of its integration of Silvermont x86 cores and a new stacked memory technology, Intel & Micron’s MCDRAM.

    Meanwhile Intel also used this occasion to announce the next generation of Xeon Phi. Dubbed Knights Hill, it will be built on Intel’s forthcoming 10nm process technology.

    Reply
  15. Tomi Engdahl says:

    Fujitsu boss sets CDOs against CIOs at annual do
    EMEAI chief tells traditionalists: ‘Don’t be a bottleneck’
    http://www.theregister.co.uk/2014/11/18/fujitsu_boss_sets_cdos_against_cios/

    Traditional IT and CIOs are often seen as a corporate bottleneck, Duncan Tait, the European boss of Fujitsu warned Tuesday, as chief digital officers increasingly take the reins.

    Such upstarts had little truck with traditional ways of doing IT, said Tait, which may in part explain why the Japanese IT giant has pledged to pour $354m into upgrading its own delivery business after years of balkanization and successive reorganisations left it fragmented.

    Who exactly will be making the final calls on IT strategy was also up for debate, Tait suggested, with “traditional IT” and IT bosses in danger of forming a decision-making bottleneck.

    CDOs – who, where they actually exist, may have more strategic sway than traditional IT ops types – often see CIOs as “not doing stuff properly”, Tait said.

    However, Tait argued that firms still needed people with traditional IT skills to guide them as they digitalised their businesses, namely building digital technology right through their value chain, even if the products remain analogue.

    However, those IT skills might come from an external partner – someone such as … Fujitsu.

    Reply
  16. Tomi Engdahl says:

    Running Debian on a Graphing Calculator
    http://hackaday.com/2014/11/18/running-debian-on-a-graphing-calculator/

    While the ubiquitous TI-83 still runs off an ancient Zilog Z80 processor, the newer TI-Nspire series of graphing calculators uses modern ARM devices. [Codinghobbit] managed to get Debian Linux running on a TI-Nspire calculator, and has written a guide explaining how it’s done.

    Reply
  17. Tomi Engdahl says:

    Nokia’s N1 Android Tablet Is Actually a Foxconn Tablet
    http://mobile.slashdot.org/story/14/11/19/0351243/nokias-n1-android-tablet-is-actually-a-foxconn-tablet

    “Nokia surprised everyone when it announced the N1 Android tablet during the Slush conference in Finland, today. This story has a twist, though: the N1 is not a Nokia device. Nokia doesn’t have a device unit anymore: it sold its Devices and Services business to Microsoft in 2013. The N1 is made by Chinese contract manufacturing company Foxconn, which also manufactures the iPhone and the iPad.

    In the case of N1, Foxconn will be handling the sales, distribution, and customer care for the device. Nokia is licensing the brand, the industrial design, the Z Launcher software layer, and the IP on a running royalty basis to Foxconn.

    Reply
  18. Tomi Engdahl says:

    The Big Data wrangling CIO you’ve probably never heard of: But his kit probably knows YOU
    The sprawling retail web estate handling 5m views per hour
    http://www.theregister.co.uk/2014/11/19/shop_direct_cio_goes_big_data/

    Shop Direct is a £1.7bn group that owns some of the best-known brands in retail – firms that pioneered what was once the cutting edge of shopping.

    Among the names it holds are Kays and Littlewoods, household brands that actually first pushed the idea of shopping without leaving your home to the UK using paper catalogues, home delivery and ordering over the phone.

    Today just 20 per cent of business is done on dead or recycled trees versus 80 per cent online. Driving online sales are the so-hip-they-hurt Very.co.uk and ISME, launched in 2009 and 2011 respectively.

    But at Shop Direct, even the traditional notion of the website is being changed: with 44 per cent of online sales via mobile, Shop Direct has rolled out native apps for iOS and Android. Next year, it throws up the group’s first stand-alone retail site tailored to a specific demographic – women aged 26 to 35 interested in designer brands.

    Reply
  19. Tomi Engdahl says:

    HDS: Storage? Pah! We’re working on a SMART CITY OPERATING SYSTEM
    Converged, hyperscale Internet of Things and smart cities rig
    http://www.theregister.co.uk/2014/11/19/hds_working_on_hyperscale_converged_iot_and_smart_cities_rig/

    Hitachi Data Systems (HDS) is working on two projects that will scale its compute and storage technologies to serve Internet of Things deployments in smart cities.

    Speaking yesterday at the company’s innovation forum in Singapore, Asia-Pac CTO Adrian De Luca mentioned the company’s recently announced decision to build an EVO:RAILS box using VMware’s template for hyperconverged kit.

    Details of just when either will emerge weren’t discussed, but De Luca’s talk suggested a certain urgency.

    The two projects probably need to be understood in the context of the company’s innovation forum, an event at which the Hitachi group talked up its ambition to assist “social innovation” with its many products.

    No less a person than Yukata Saito, Hitachi’s fifth-ranking executive in the organisation, opened the event with a vision for using all of the company’s assets to develop analytics-driven products and services that enable governments to meet the demands of swelling populations and the stresses they place on resources.

    Enthusiasm for this concept is high in Asia. Mobile device penetration is soaring across even the region’s less prosperous nations and citizens are keen for better service delivery from their governments and businesses.

    Reply
  20. Tomi Engdahl says:

    DRaaS-tic action: Trust the cloud to save your data from disaster
    Accidents happen…
    http://www.theregister.co.uk/2014/11/19/disaster_recovery/

    In modern computing, disaster recovery can be thought of in the same way as insurance: nobody really wants to pay for it, the options are complicated and seemingly designed to swindle you, but it is irrational (and often illegal) to operate without it.

    All the big IT players are getting into disaster recovery as a service (DRaaS), and many of the little ones are too.

    The core concept is simple: someone with a publicly accessible cloud stands up some compute, networking and storage and lets you send copies of your data and workloads into their server farm.

    If your building burns down or some other disaster hits your company, you can log into the DRaaS system, push a few buttons and all the IT for your entire business is up and running in moments. If only car insurance were that easy.

    But like car insurance, DRaaS comes in flavours. There are so many options from so many vendors that the mind boggles.

    Prices and capabilities vary wildly. Perhaps most importantly, the amount of effort required to make the thing work properly, and keep it working, can vary quite a bit too.

    Simply using software as a service offerings for critical functions and letting the rest burn is not particularly rational either. Public cloud services still need to be backed up.

    Vendors go under. Some putz could hack your account and delete everything. A plane could fall out of the sky and land directly on the storage array containing the only copy of your data.

    So you cannot avoid disaster recovery planning. You can, of course, set up your own disaster recovery solution. Go forth and build your own data centre, or even just toss a server in a colo.

    Both are excellent options, if the circumstances, requirements and budget of the company are right. For everyone else, there’s DRaaS.

    Reply
  21. Tomi Engdahl says:

    EPEL Orphaned packages and their dependents to be removed Dec 17th
    http://www.karan.org/blog/2014/11/13/epel-orphaned-packages-and-their-dependents-to-be-removed-dec-17th/

    The EPEL repository runs from within the Fedora project, sharing resources ( including, importantly, their source trees ) with the Fedora ecosystem; over the years it’s proven to be a large and helpful resource for anyone running CentOS Linux.

    One key challenge they have, however, much like CentOS Linux, is that the entire effort is run by a few people helped along by a small group of volunteers. So while the package list they provide is huge, the group of people putting in the work behind it is small. A fallout from this is that over the years a significant chunk of packages in the EPEL repo have become orphaned. They once had a maintainer, but either that maintainer has gone away or has other priorities.

    A few days back, Steven announced that they were going to start working to drop these orphaned packages unless someone steps up to help maintain them. You can read his announcement here : https://lists.fedoraproject.org/pipermail/epel-devel/2014-November/010430.html

    Reply
  22. Tomi Engdahl says:

    FPGA to accelerate x86 servers

    The next generation of servers will use programmable FPGA circuits alongside the CPU to accelerate various computing tasks. IBM and Xilinx are already demonstrating such POWER8-based solutions at the Supercomputing event in New Orleans.

    Servers have long been accelerated by offloading computation to graphics processors. FPGA circuits can deliver much more computing power at much lower power consumption, but developing for them has been very demanding and time-consuming.

    That has been the case until now. With Xilinx’s new SDAccel tools it is possible to develop FPGA algorithms in C, C++ and OpenCL, and only when the programming is complete is the code translated for the FPGA.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=2094:fpga-kiihdyttaa-x86-palvelimia&catid=13&Itemid=101
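
    To make the workflow concrete, here is a minimal host-side sketch in Python using pyopencl with a trivial OpenCL C vector-add kernel. This targets whatever OpenCL device the runtime exposes; the same kind of C/OpenCL kernel source is what a toolchain such as SDAccel would compile into FPGA logic, but nothing below is specific to Xilinx’s tools.

    ```python
    import numpy as np
    import pyopencl as cl

    # Trivial OpenCL C kernel: element-wise vector addition.
    KERNEL_SRC = """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """

    ctx = cl.create_some_context()      # pick any available OpenCL device
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # Build the kernel for the target device and launch one work-item per element.
    prog = cl.Program(ctx, KERNEL_SRC).build()
    prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)
    ```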

    Reply
  23. Tomi Engdahl says:

    All-singing-all-dancing hyperconverged Maxta gets Cisco’s blessing
    Follows Simplivity, but with added metro cluster support
    http://www.theregister.co.uk/2014/11/19/hyperconverged_maxta_gets_cisco_blessing/

    Maxta has announced the launch of its MxSP, the first Cisco-certified hyper-converged system to run on UCS C-Series servers and support metro-distance clusters.

    But how can a storage software company supply a hyperconverged server/storage/networking/software system when all it does is create the storage software component?

    Maxta claims its “software-defined storage solutions provide organisations the choice to deploy hyper-convergence on any x86 server, use any hypervisor, and any combination of storage devices”.

    Um, marketing alert, marketing alert. What Maxta seems to be saying is that customers (aka organisations) can deploy virtualised X86-based servers and attach any storage devices to them to create a hyperconverged system.

    Instead of a hyperconverged system being a single IT appliance-like entity bought with a single SKU, as with Nutanix and Simplivity (OmniStack) boxes, it’s now the result of an on-site integration exercise resulting in a scale-out cluster of software/server/storage/networking nodes.

    No networking switch is included but VMware vSphere Metro Storage Cluster is supported, providing the ability to replicate data across data centres and so provide protection against data centre failure.

    Reply
  24. Tomi Engdahl says:

    How IT will evolve to photonics
    Professor Rod Tucker charts a course to the all-optical, low-energy future
    http://www.theregister.co.uk/2012/11/26/interview_rod_tucker/

    Replacing electronics with photonics will one day be an important way to run IT while consuming far less power than is the case today. But while that idea looks great on paper, the research is still young.

    The Internet’s voracious appetite for electricity needs some near-term solutions, so as The Register followed up our piece on photonics, we also spoke to Professor Rod Tucker of the University of Melbourne, director of both the Institute for a Broadband-Enabled Society and the Centre for Energy-Efficient Telecommunications.

    Why ‘slow light’ might just save the Internet
    Photonics lights a path to a high-speed, low-energy, Internet
    http://www.theregister.co.uk/2012/11/24/cudos_photonics_australian_institute_physics/

    Reply
  25. Tomi Engdahl says:

    Data-center upstart grabs Wozniak, jumps into virtual storage fight
    Primary Data launches at EMC and Quantum with ‘data hypervisor’
    http://www.theregister.co.uk/2014/11/19/primary_data/

    A fresh startup called Primary Data reckons it will reinvent “file virtualization” for software-defined data centers – and thus take on EMC’s ViPR and Quantum’s StorNext.

    Primary Data, cofounded by David Flynn, made the boast as Apple cofounder Steve Wozniak announced he has left Fusion-io to join Flynn at Primary Data as chief scientist.

    Now Flynn and Woz are back on the same team – and Primary Data is emerging from stealth mode to reveal an outline of what it’s developing.

    Flynn, Primary Data’s CTO, said: “Data virtualization is the inevitable next step for enterprise architectures, as it seamlessly integrates existing infrastructure and the full spectrum of specialized capabilities provided by ultra-performance and ultra-capacity storage resources.”

    The technology defines a “data hypervisor” that hides all the storage hardware and software below a global file namespace. There are separate channels for sending and receiving data, and for controlling access to the stored data.

    Admins can set policy definitions for placing and moving information, with rules reflecting storage performance, price and protection needs.

    The data hypervisor allows clients to dip into the storage systems in a protocol-agnostic way. Primary Data’s software does the hard work underneath to provision capacity and keep the bytes in line.
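
    Primary Data has not published API details, so the snippet below is purely a hypothetical illustration of what such a placement policy could look like: objectives expressed as performance, price and protection, matched against the storage tiers that sit under the global namespace. Nothing here is Primary Data’s actual interface.

    ```python
    from dataclasses import dataclass

    # Purely hypothetical data structures -- not Primary Data's actual API --
    # illustrating "policy definitions for placing and moving information".

    @dataclass
    class StorageTier:
        name: str
        latency_ms: float      # typical access latency
        cost_per_gb: float     # $/GB per month
        copies: int            # redundancy level

    @dataclass
    class Policy:
        max_latency_ms: float
        max_cost_per_gb: float
        min_copies: int

        def allows(self, tier: StorageTier) -> bool:
            return (tier.latency_ms <= self.max_latency_ms
                    and tier.cost_per_gb <= self.max_cost_per_gb
                    and tier.copies >= self.min_copies)

    tiers = [
        StorageTier("all-flash", latency_ms=0.5, cost_per_gb=0.50, copies=2),
        StorageTier("hybrid",    latency_ms=5.0, cost_per_gb=0.10, copies=2),
        StorageTier("cloud",     latency_ms=50.0, cost_per_gb=0.02, copies=3),
    ]

    # A database workload: needs low latency, tolerates flash pricing.
    db_policy = Policy(max_latency_ms=1.0, max_cost_per_gb=1.0, min_copies=2)
    print([t.name for t in tiers if db_policy.allows(t)])   # ['all-flash']
    ```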

    Reply
  26. Tomi Engdahl says:

    Qualcomm to Build ARM-Based Server Chips
    http://www.eweek.com/servers/qualcomm-to-build-arm-based-server-chips.html

    CEO Steve Mollenkopf gave few details, but Qualcomm will present a challenge to both smaller ARM server chip makers and dominant player Intel.
    Qualcomm, the world’s top mobile chip maker, is ready to get into the crowded ARM-based server chip business.

    At the company’s annual analyst day Nov. 19 in New York, CEO Steve Mollenkopf said company engineers have been working on the technology “for some time. Now we are going to have a big product that goes into the server.”

    Reply
  27. Tomi Engdahl says:

    Intel Merges Mobile, PC Divisions
    Promise of a big smartphone division fades
    http://www.eetimes.com/document.asp?doc_id=1324682&

    In a move to streamline business and combat dismal mobile financial results, Intel will merge its mobile and PC divisions in early 2015.

    The Mobile and Communications Group reported a $1 billion operating loss in the third quarter of 2014, with revenue falling to just $1 million year-over-year. The mobile chip group will join Intel’s profitable PC Client Group — which saw a 6% increase in revenue to $9.2 billion in the third quarter — under PC Group vice president Kirk Skaugen.

    Reply
  28. Tomi Engdahl says:

    Firefox Signs Five-Year Deal With Yahoo, Drops Google as Default Search Engine
    http://tech.slashdot.org/story/14/11/19/2313217/firefox-signs-five-year-deal-with-yahoo-drops-google-as-default-search-engine

    Google’s 10-year run as Firefox’s default search engine is over. Yahoo wants more search traffic, and a deal with Mozilla will bring it. In a major departure for both Mozilla and Yahoo, Firefox’s default search engine is switching from Google to Yahoo in the United States.

    Firefox drops Google as default search engine, signs five-year deal with Yahoo
    http://www.theverge.com/2014/11/19/7250513/firefox-signs-yahoo-as-default-search-engine-

    Today, Yahoo and Mozilla announced a five-year partnership that would make Yahoo the default US search engine for Mozilla’s Firefox browser on mobile and desktop. In December, Yahoo will roll out an enhanced new search function to Firefox users, and will also support Do Not Track functions in Firefox as a result of the partnership.

    Reply
  29. Tomi Engdahl says:

    Giving mobile users the applications they want is child’s play
    Buy in or magic up your own
    http://www.theregister.co.uk/2014/11/20/application_programming/

    Working on the move has become most people’s normal way of operating. We are used to having our world in our pocket and being able to read and write emails, produce simple documents and generally stay in the corporate loop whether we are in the office, in the pub, on a train or (sadly) sitting on a beach trying to be on holiday.

    To make this possible, applications have moved away from the desktop and laptop.

    The order of the day is being able to run our core applications from the phones or mini-tablets which it is second nature to carry about with us.

    This gives us three choices. First we have browser-based applications: since every phone and tablet has a web browser (probably several if it is anything like mine) the simplest way to go is to use browser-based versions of those applications.
    That’s all very well, but using a web GUI on a small phone – iPhone-sized, say – is fiddly and something you won’t enjoy if you do it a lot.

    Next we have remote desktop sessions presenting what is basically a corporate PC desktop via a virtualisation technology from the likes of Citrix or VMware.
    Now, a remote desktop on a full-fat iPad is just about usable, and on a midi-sized device it is kind-of okay as long as you don’t do it a lot.

    Let’s hope, then, that the final option is reasonably attractive. So what is it? Easy: applications that run natively on your mobile device and hence provide a user interface and feature set that were designed into the apps in the first place.
    This article will look at the feasibility of providing your roving users the ability to run things natively on the devices they carry around with them.

    Reply
  30. Tomi Engdahl says:

    What should America turn to for web advice? That’s right: GOV.UK – says ex-Obama IT guru
    Uncle Sam could learn a thing or two from Brits
    http://www.theregister.co.uk/2014/11/20/jennifer_pahlka_on_tech_and_government/

    It’s not the most obvious place you would expect a Silicon Valley-ite to point to as the future of the America, but Jennifer Pahlka is a big fan of the UK government’s website.

    Pahlka heads up Code For America, an organization helping the public sector make better use of IT. She’s just finished a stint in Washington DC as deputy chief technology officer of the United States.

    “The sense in Washington appears to be ‘let’s let Silicon Valley people teach us for a while’, but what they need to think about is how we can rethink the whole approach,” Pahlka told a meeting of open data advocates in Oakland City Hall, California, on Tuesday night.

    Where should people look for guidance? The UK, and in particular the Government Data Service (GDS) and its head Mike Bracken. The GDS team is behind Blighty’s GOV.UK website.

    Meet the nerd with ‘more power than a geek has ever had in the US government’

    Dickerson now has “more power than a geek has ever had in the US government,” explained Pahlka. But Dickerson and the department still have significantly less sway than their UK counterparts.

    The Healthcare.gov debacle brings out the second most important aspect of governing in the 21st century: more geeks in government.

    “You can’t govern now unless you have people who understand technology,” Pahlka enthused. “If you can’t implement the policies or the laws passed, then you can’t govern.” Healthcare.gov serves as a perfect example of where a new law and signature policy was at risk due to the inability of Washington to deliver technical excellence.

    An incredible 94 per cent of IT projects carried out by the federal government fail under their own definition of “fail”, and 40 per cent of them never see the light of day, Pahlka said. When prodded by an audience member that the same number of startups fail, she fired back: “Yes, but most of them don’t blow through $2bn.”

    Reply
  31. Tomi Engdahl says:

    A CFO’s View of Consumer Data
    http://www.cio.com/article/2834767/big-data/a-cfos-view-of-consumer-data.html

    CIO’s Martha Heller talks to Greg Walsh, CFO of IPG Mediabrands, about how big data and second screens are transforming media buying.

    How is technology changing your business? We have a goal that by 2015, 50 percent of our buying will be automated, which includes even the most traditional media buying processes. Much of this is focused on programmatic buying, where we use data to buy audiences in real time. That’s very different from buying TV way in advance of the event and then seeing the results after the fact.

    As your industry has become more technologically-driven, how has your role changed? Our biggest cost and asset is our people. The second is our technology. So just as I need to understand how we are developing our talent, I need to understand our technology investments. Technology tends to threaten people; it is my role to explain how it is making them more effective, not replacing them. As a result, I work very closely with our chief HR officer, Alastair Procter, and our CIO, Sam Chesterman.

    How are consumer technologies affecting your industry? It’s the evolution of the second screen, where people watch TV but there is also a second or even a third screen. The more screens people are on, the more ways we can reach them. We need to recognize how many screens people are on and what drives them from one to the other.

    Reply
  32. Tomi Engdahl says:

    CIOs Must Market IT’s Value
    http://www.cio.com/article/2835675/it-strategy/cios-must-market-its-value.html

    Why don’t more CIOs make it a serious priority to market IT internally? Done well, it shows the business value of IT and gives visibility to top performers, says CIO Publisher Adam Dennison.

    Reply
  33. Tomi Engdahl says:

    How to Fish for (and Land) IT Talent
    http://www.cio.com/article/2835765/careers-staffing/how-to-fish-for-and-land-it-talent.html

    CIOs need to be deeply involved in writing IT job postings — not just leave it to the HR admin — to lure great hires.

    Reply
  34. Tomi Engdahl says:

    Firefox drops Google as default search engine, signs five-year deal with Yahoo —
    Default search engine for Firefox in Europe remains Google, Yandex in Russia, Baidu in China —

    Mozilla CEO: It Wasn’t Money — Yahoo Was The Better Strategic Partner For Firefox
    http://marketingland.com/mozilla-ceo-yahoo-better-firefox-partner-108539

    Mozilla CEO Chris Beard says when presented with deals of “equivalent economics” to power search in Firefox, Yahoo was the better partner.

    The official line from the Mozilla blog post about the deal helps parse what being a good strategic partner seems to be.

    New Search Strategy for Firefox: Promoting Choice & Innovation
    https://blog.mozilla.org/blog/2014/11/19/promoting-choice-and-innovation-on-the-web/

    Reply
  35. Tomi Engdahl says:

    Huawei: KRYDER STORAGE CRISIS is REAL and ‘we’re working on it’
    The shark threatening to eat big data just got bigger
    http://www.theregister.co.uk/2014/11/20/huawei_the_kryder_crisis_is_real_and_were_working_on_it/

    Huawei’s chief storage boffin says the company has looked at the looming crisis in storage technology with concern.

    Until 2010, the cost per gigabyte of storage had dropped exponentially. The decline has been dubbed the Kryder rate, or “Kryder’s Law”, a reference to an article in a 2005 issue of Scientific American of the same name where Seagate CTO Mark Kryder declared that disk drive areal density would more than double every two years, meaning disk drive capacity would do likewise.

    But just as semiconductor density over time has diverged from Moore’s Law – it’s no longer doubling every 18 months – the Kryder rate has diverged too, as we described here.

    The divergence has huge implications for Big Data businesses predicated on ever-cheaper storage. “It’s not that Big Data won’t happen,” David Rosenthal, the veteran engineer who highlighted the crisis, told us recently, “but it will be more expensive than people realise.”
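
    As a rough sketch of why the slowdown matters, the calculation below compares a cost-per-gigabyte curve that keeps halving every two years (the old Kryder pace) with one that halves only every five years. Both rates and the starting price are illustrative assumptions, not figures from the article.

    ```python
    # Illustrative projection of $/GB under two capacity-growth assumptions.
    # Neither rate is from the article; they only show how sensitive long-term
    # storage budgets are to the Kryder rate.

    def cost_per_gb(start_cost, halving_period_years, years):
        """Cost per GB after `years`, if it halves every `halving_period_years`."""
        return start_cost * 0.5 ** (years / halving_period_years)

    start = 0.05   # $/GB today (illustrative)
    for years in (2, 4, 6, 8, 10):
        fast = cost_per_gb(start, halving_period_years=2, years=years)   # historic pace
        slow = cost_per_gb(start, halving_period_years=5, years=years)   # slowed pace
        print(f"year {years:2d}:  fast ${fast:.4f}/GB   slow ${slow:.4f}/GB   "
              f"ratio {slow / fast:.1f}x")
    # By year 10 the slowed curve leaves storage roughly 8x more expensive
    # than the old trend would have predicted.
    ```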

    Reply
  36. Tomi Engdahl says:

    YOU are the threat: True confessions of real-life sysadmins
    Who will save the systems from the men and women who save the systems from you?
    http://www.theregister.co.uk/2014/11/19/the_enemy_within/

    Some sysadmins will go to extremes to secure a network, viewing it (wrongly) as their property.

    For proof, look no further than Terry Childs, the City of San Francisco sysadmin who lost his job and subsequently refused to give over the system’s virtual keys to his superiors in 2008.

    It took just under a million dollars, several weeks, and the concerted efforts of several equipment vendors to put things right.

    Childs had configured the equipment (predominantly Cisco) so securely that not only did no other administrator have rights to the switches and routers, but configs were not saved – so any power loss or attempt to reboot the switch or router into recovery mode would not work.

    “One admin said that given the right amount, he would compromise the system. Interestingly, the administrator stated that the amount had to be big enough so that they would not have to work again. This decision was based on the fact no one would ever employ them again.”

    “Some bigger companies now implement more stringent background checks including financial screening and crime screening. The general view on these checks is that they have limited use.”

    Reply
  37. Tomi Engdahl says:

    As Firefox dumps Google for Yahoo, is the clock ticking for Mozilla?
    http://www.theguardian.com/technology/2014/nov/20/firefox-google-yahoo-mozilla

    Firefox users in the US will now see Yahoo as their default search engine, but Google may have allowed itself to be outbid

    Reply
  38. Tomi Engdahl says:

    Intel Planning Thumb-Sized PCs For Next Year
    http://hardware.slashdot.org/story/14/11/20/2217257/intel-planning-thumb-sized-pcs-for-next-year

    Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that,” said Kirk Skaugen

    Intel planning thumb-sized PCs for next year
    The devices will plug into smart TVs and monitors
    http://www.computerworld.com.au/article/560171/intel-planning-thumb-sized-pcs-next-year/

    Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year.

    The stick will plug into the back of a smart TV or monitor “and bring intelligence to that,” said Kirk Skaugen, senior vice president and general manager of the PC Client Group at Intel, during the Intel investor conference in Santa Clara, California, which was webcast.

    A device the size of a USB stick was shown on stage, but its capabilities were not demonstrated. Skaugen said the devices will be an extension to laptops and mini-desktops, which have Core desktop processors in small PCs that can be handheld.

    Skaugen likened the compute stick to similar thumb PCs offered by PC makers with the Android OS and ARM processor. Dell’s US$129.99 Wyse Cloud Connect — which plugs into an HDMI port — can turn a screen or display into a PC, gaming machine or streaming media player.

    Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. The Wyse Cloud Connect has Wi-Fi and Bluetooth.

    Reply
  39. Tomi Engdahl says:

    CERN IT boss: What we do is not really that special
    You’ll all be doing the same – in about 10 years’ time
    http://www.theregister.co.uk/2014/11/20/cern_it_chief_on_clouds/

    When the head of infrastructure services at CERN tells you that he has come to the conclusion that there’s nothing intrinsically “special” about the systems at the multi-billion atom-smasher, you naturally want to check you’ve heard correctly.

    After all, when we sat down with Tim Bell at the OpenStack Summit in Paris recently, it was rather noisy as around 5,000 visibly excited techies swapped war stories about the open cloud computing platform while vendors hurled hospitality and job offers at them.

    Few of them, however, would be running the sort of systems for which Bell and his team are responsible: a 100PB archive growing at 27PB a year, with 11,000 servers supporting 75,000 disk drives and 45,000 tapes. And that data is being thrown off by the machine that recently found the Higgs Boson, the so-called God particle. Most tech managers would say that’s at the upper end of data infrastructure challenges.
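
    Those figures are easier to grasp as rates. The arithmetic below is derived purely from the numbers quoted above; the per-drive average is a naive back-of-the-envelope value, not CERN’s actual configuration.

    ```python
    # Simple arithmetic on the CERN figures quoted above.
    archive_pb = 100
    growth_pb_per_year = 27
    disk_drives = 75_000

    seconds_per_year = 365 * 24 * 3600
    avg_ingest_mb_s = growth_pb_per_year * 1e15 / seconds_per_year / 1e6

    print(f"Average ingest rate:     {avg_ingest_mb_s:.0f} MB/s")                # ~856 MB/s
    print(f"Average data per drive:  {archive_pb * 1e3 / disk_drives:.1f} TB")   # ~1.3 TB
    print(f"Years to double archive: {archive_pb / growth_pb_per_year:.1f}")     # ~3.7
    ```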

    But two years ago, when Bell and his team started planning for the upgrade, it was time to do some hard thinking – a not uncommon practice at CERN you’d assume. Even the world of top-end physics has to operate within human laws such as economics – to some degree anyway. And, according to Bell, this means no more staff, a decreasing materials budget, and legacy tools that are “high maintenance and brittle”. And just in case you were wondering, the “users” expect fast self-service.

    “The big thing in this case was to apply that to the IT department … we were basically challenging some fundamental assumptions that CERN has to create its own solutions. That they’re special.”

    “There are clearly some special parts,” says Bell. “But there are also often things that are of interest to other people. The key thing to avoid is where we end up doing something that is similar to what is being done outside.”

    The IT on-site at CERN has been supplemented by a new data centre in Hungary. Even so, Bell continues, “What we needed to appreciate was the extent to which the organisation needed to change as well as it just being a matter of installing some more servers.”

    Hence the decision to get up close and personal with OpenStack in general and Rackspace in particular. It might be worth noting that, back when we wrote this, the firehose Bell’s team was drinking from was pumping out a mere 25PB a year.

    “After a few months of prototyping then we had the basis to set in place something where we could map out the roadmap to retire the legacy and the legacy environment. The decommissioning of it started on the 1st of November,” Bell says. “So in 18 months we basically produced a tool chain [which is] replacing the legacy environment that we’d run for the previous 10 years.”

    “Many times people are joining CERN with the knowledge of the tools from university,” says Bell. “So it means that the training time is considerably less – you can buy a book that will tell you about Puppet whereas in the past you would have had to sit down with the guru to understand how the old system worked.”

    “Now in this case what’s great is that we take a Linux expert out of university and we produce someone that’s trained in Openstack and Puppet and they find themselves in a lot of demand at such time as their contract at CERN finishes.”

    And what happens to all of these staff? “Many of them are working at the companies that have been collaborating with us around open source. So one of the good things about summits is that the people come here they network and as part of the collaboration software development – the companies are able also to understand the quality of the people we have at CERN and therefore many of them are in high demand at such time as their contracts come to an end.”

    “So amongst other things, we at CERN developed the active directory driver for OpenStack and now with Rackspace we’ve done the federated identity to allow multiple clouds to talk to each other.”

    “As part of CERN’s mission, it’s not only the physics. There is a clear goal for CERN to also act as a place for people to arrive, spend a short period of time at CERN – up to five years on a short-term contract – and then return to their home countries with those additional skills. That could be engineering, [equally] it could be physics and computing.”

    Reply
  40. Tomi Engdahl says:

    Intel offers ingenious piece of 10TB 3D NAND chippery
    The race for next generation flash capacity now on
    http://www.theregister.co.uk/2014/11/21/intel_offering_an_ingenious_piece_of_10tb_3d_nand_chippery/

    IMFT, Intel Micron Flash Technologies, a partnership between Intel and Micron, has a 3D MLC NAND technology, which will be used to build 10TB SSDs within two years.

    With 3D flash, a die is made up of layers of ordinary (2D planar) cells stacked, as it were, one above the other.

    The news came in a webcast for Intel investors yesterday, 20 November, with Rob Crooke, veep and GM of Intel’s non-volatile memory solutions group, revealing the development:

    32 planar layers
    c4 billion interconnect pillars between the layers
    256Gbit – 32GB – of capacity in a die using MLC (2 bits/cell) NAND
    384Gbit – 48GB – using TLC (3 bits/cell) NAND

    The process geometry was not revealed but is thought to be 3X-class – that is, 30-39nm.

    Crooke foresaw 10TB SSDs in two years, meaning (we’d assume) late 2016/early 2017, and promised disruptive costing, meaning (again, we’d assume) per-GB pricing nearer that of disk.
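
    Taking the quoted die capacities at face value, a quick Python sketch (illustrative only; it ignores over-provisioning, spare area and packaging overhead) shows roughly how many dice a 10TB drive would need:

        # Rough dice count for a 10 TB SSD built from the quoted 3D NAND dice.
        TB = 10**12
        GB = 10**9

        mlc_die_gb = 32   # 256 Gbit die, MLC (2 bits/cell), as quoted
        tlc_die_gb = 48   # 384 Gbit die, TLC (3 bits/cell), as quoted
        target_bytes = 10 * TB

        mlc_dice = -(-target_bytes // (mlc_die_gb * GB))  # ceiling division
        tlc_dice = -(-target_bytes // (tlc_die_gb * GB))

        print("MLC dice needed:", mlc_dice)  # 313
        print("TLC dice needed:", tlc_dice)  # 209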

    Other 3D flash initiatives are coming from Hynix, Samsung, and SanDisk, with Samsung being the most advanced.

    Reply
  42. Tomi Engdahl says:

    Eyes-on with Streaming Photoshop: Adobe’s plan to bring PS to the cloud
    More details about Adobe’s upcoming cloud-based version of Photoshop.
    http://arstechnica.com/gadgets/2014/11/eyes-on-with-streaming-photoshop-adobes-plan-to-bring-ps-to-the-cloud/

    The primary purpose of Photoshop-in-a-browser is to get the app running on Chrome OS, which pretty much can only run a browser. Chrome OS has taken off as a competitor to Windows—the NPD’s last estimate put it at 35% of commercial notebook sales—but it lacks a few killer apps like Photoshop. The other benefit is that you can now run Photoshop on just about any computer without having to worry about RAM and CPU usage, since all the computer has to display is a video stream. Adobe says even the $200 Chromebooks on the market today should be fast enough to handle Streaming Photoshop.

    Reply
  43. Tomi Engdahl says:

    With Assembly, anyone can contribute to open-source software and actually get paid
    A new startup wants to evolve open-source methods, adding a wide range of skills and profit sharing
    http://www.theverge.com/2014/11/21/7258667/assembly-collaborative-work-open-source

    The open-source movement has produced some of the most widely utilized software in the world, a huge economic value driven by a widely dispersed community who believe contributing good work is often its own reward. Outside of the world of computer science, however, these strategies are still relatively niche. A San Francisco startup called Assembly is trying to change all that, by evolving the open-source model to easily incorporate disciplines outside coding and to include a shared profit motive as well. Today the company is announcing a $2.9 million round of funding it will use to help expand its platform.

    With Assembly, a part-time entrepreneur like Kaneda can open source any number of tasks he might need for his business: designing a new logo, creating an email marketing campaign, and researching the best cloud-hosting solution, for example. Since the business doesn’t have outside funding or stock options, Assembly lets him set the reward as a percentage of future earnings and handles the work of dividing and distributing that revenue stream.
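
    As a purely hypothetical illustration of that revenue-sharing idea (the task names and percentages below are made up, not Assembly’s actual mechanics), a pro-rata split might be sketched in Python like this:

        # Hypothetical pro-rata payout of one month's revenue to contributors.
        # The shares and task names are illustrative only.
        shares = {
            "logo_design": 0.05,       # 5% of earnings for the new logo
            "email_campaign": 0.03,    # 3% for the marketing campaign
            "hosting_research": 0.02,  # 2% for the cloud-hosting research
            "founder": 0.90,           # remainder stays with the project owner
        }

        def distribute(revenue: float) -> dict:
            """Split a revenue figure according to the agreed shares."""
            assert abs(sum(shares.values()) - 1.0) < 1e-9
            return {who: round(revenue * share, 2) for who, share in shares.items()}

        print(distribute(10_000.0))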

    Reply
  44. Tomi Engdahl says:

    Amazon’s AppStream can now stream any Windows application
    Summary: Amazon expands its streaming service, which previously targeted games developers, to support any Windows application.
    http://www.zdnet.com/amazons-appstream-can-now-stream-any-windows-application-7000036050/

    Amazon Web Services (AWS) has updated AppStream to allow any Windows application to be accessed through a browser.

    Besides offloading graphics workloads to AppStream so that less powerful devices can access heavy-duty applications, AWS developers can now use AppStream to deliver any Windows application to non-Windows devices running FireOS, Android, Chrome, iOS or Mac OS X, as well as to Windows devices.

    “You can now stream just about any existing Microsoft Windows application without having to make any code changes,” AWS evangelist Jeff Barr wrote.

    Reply
  45. Tomi Engdahl says:

    AMD Integrates X86, GPU & I/O
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1324734&

    Early next year, AMD will ship Carrizo, its most integrated x86 processor to date, combining I/O with — in some versions — new x86 and GPU cores.

    Advanced Micro Devices announced at its “Future of Compute” event in Singapore two new integrated x86 processors on its roadmap, Carrizo and Carrizo-L. The chips are AMD’s most integrated parts to date, putting not only the CPU and GPU but also the south bridge on a single die, a design move that should improve performance and certainly cut costs.

    The new parts replace the current Kaveri and Beema chips with ones AMD says will deliver a significant leap in performance and energy efficiency in 2015, targeting business and consumer markets.

    Both new chips will support DirectX 12, OpenCL 2.0, AMD’s Mantle, FreeSync and Windows 10. They will be AMD’s first integrated processors compliant with the full HSA 1.0 specification that AMD has spearheaded for chips that merge CPU and GPU cores as equals.

    Reply
  46. Tomi Engdahl says:

    Too 4K-ing expensive? Five full HD laptops for work and play
    High-def desirables for a decent price
    http://www.theregister.co.uk/2014/11/24/product_round_up_five_full_hd_laptops/

    The intense competition in the PC market means that you can now get some pretty decent laptops for less than £500. However, one common cost-cutting measure employed by most budget laptops is the use of a lower-res display – typically just 1366×768 pixels. Most likely these displays will still be called HD in the blurb, but that’s only because these panels can accommodate 720p video – 1280 x 720 pixel resolution.

    For the more discerning eye, that’s just not enough, and while we’ll be looking at the more expensive HiDPI laptops soon, full HD laptops are certainly more affordable these days, especially if you’re prepared to trade having a high-performance CPU or a speedy solid-state drive for a crisper, higher resolution image instead.
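
    For context, the pixel counts behind those resolutions can be checked with a trivial Python snippet (full HD has roughly twice the pixels of a typical budget panel):

        # Pixel counts for the resolutions mentioned above.
        resolutions = {
            "720p video": (1280, 720),
            "typical budget panel": (1366, 768),
            "full HD": (1920, 1080),
        }
        for name, (w, h) in resolutions.items():
            print(f"{name}: {w} x {h} = {w * h / 1e6:.2f} megapixels")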

    Reply
  47. Tomi Engdahl says:

    Shaking that AAS: It’s time for vendors to stop selling storage
    They should follow Amazon’s lead
    http://www.theregister.co.uk/2014/11/24/stop_selling_storage/

    In a number of recent meetings with storage vendors, almost without exception they mentioned AWS and the other large cloud vendors as a major threat and compared their own costs to them.

    We’ve all seen the calculations and generally we know that for many large enterprises the costs often favour the traditional vendors; buying at scale and at the traditionally large discounts means that we get a decent deal. Storage turns out to be effectively free at the terabyte level and only becomes an appreciable cost once we start getting to petascale – and this is pretty much true for both the cloud providers and the traditional vendors.
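
    That calculation is easy to sketch. The per-gigabyte prices and discount below are placeholder numbers (not any vendor’s real pricing), purely to show how scale changes the comparison:

        # Illustrative cost comparison; prices and discount are made-up placeholders.
        cloud_per_gb_month = 0.03        # hypothetical cloud storage price, $/GB/month
        onprem_per_gb_month = 0.05       # hypothetical fully loaded on-prem list price
        discount_at_scale = 0.60         # hypothetical discount when buying at petabyte scale

        def monthly_cost(gb: float) -> tuple:
            onprem = onprem_per_gb_month * gb
            if gb >= 1_000_000:          # petabyte scale: big traditional discounts kick in
                onprem *= 1 - discount_at_scale
            return cloud_per_gb_month * gb, onprem

        for label, gb in [("1 TB", 1_000), ("100 TB", 100_000), ("1 PB", 1_000_000)]:
            cloud, onprem = monthly_cost(gb)
            print(f"{label}: cloud ${cloud:,.0f}/month vs on-prem ${onprem:,.0f}/month")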

    But when I look round the room in a normal sales presentation or briefing, it is not uncommon for the vendor to have four or five people present, often outnumbering the customers in the room: account salesman, product salesman, account technical specialist, product technical specialist and probably a couple of hangers-on. That is a huge cost to the vendor and hence to me as a customer.

    And then if we decide that we want to purchase the storage, we drift into extended procurement mode. Our procurement and finance teams will talk to the vendor teams, and there may well also be legal teams and further meetings to deal with. The cost to both the vendor and the customer is enormous.

    However, if we go to a cloud vendor, we generally deal with a website. The cost is there: it’s displayed to all and the only discounts we get are based around volume.

    It seems to me that if the traditional storage vendors really want to compete with the cloud vendors, they need to change their sales model completely.

    This means stripping out huge amounts of the cost of sale

    Basically, vendors should stop selling storage. Instead they should build better products, market them sensibly, and reduce the friction of acquisition.

    Reply
  48. Tomi Engdahl says:

    How Twine, a creation tool for text-based video games, brought more diverse voices to gaming

    Twine, the Video-Game Technology for All
    http://www.nytimes.com/2014/11/23/magazine/twine-the-video-game-technology-for-all.html

    Perhaps the most surprising thing about “GamerGate,” the culture war that continues to rage within the world of video games, is the game that touched it off. Depression Quest, created by the developers Zoe Quinn, Patrick Lindsey and Isaac Schankler, isn’t what most people think of as a video game at all. For starters, it isn’t very fun. Its real value is as an educational tool, or an exercise in empathy. Aside from occasional fuzzy Polaroid pictures that appear at the top of the screen, Depression Quest is a purely text-based game that proceeds from screen to screen through simple hyperlinks, inviting players to step into the shoes of a person suffering from clinical depression.

    Quinn had created graphically oriented games before, including the satirical Ghost Hunter Hunters. But she decided to make Depression Quest through an increasingly popular program called Twine. Although it’s possible to add images and music to Twine games, they’re essentially nothing but words and hyperlinks; imagine a digital “Choose Your Own Adventure” book, with a dash of retro text adventures like Zork. A free program that you can learn in one sitting, Twine also allows you to instantly publish your game so that anyone with a web browser can access it. The egalitarian ease of Twine has made it particularly popular among people who have never written a line of code — people who might not even consider themselves video-game fans, let alone developers. Chris Klimas, the web developer who created Twine as an open-source tool in 2009, points out that games made on it “provide experiences that graphical games would struggle to portray, in the same way books can offer vastly different experiences than movies do. It’s easy to tell a personal story with words.”

    Twine represents something radical: the transformation of video games into something that is not only consumed by the masses but also created by them.

    Twine
    http://twinery.org/

    Twine is an open-source tool for telling interactive, nonlinear stories.

    You don’t need to write any code to create a simple story with Twine, but you can extend your stories with variables, conditional logic, images, CSS, and JavaScript when you’re ready.

    Twine publishes directly to HTML, so you can post your work nearly anywhere. Anything you create with it is completely free to use any way you like, including for commercial purposes.

    Twine has been used to create hundreds of works
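
    Twine itself requires no code, but for readers who think in data structures, here is a minimal Python sketch of the underlying idea: passages connected by named links. This is an illustration of the concept only, not Twine’s actual file format or story engine.

        # A tiny "passages and links" story graph, in the spirit of a nonlinear Twine story.
        passages = {
            "Start": ("You wake up. The alarm is ringing.",
                      {"Turn it off": "Snooze", "Get up": "Morning"}),
            "Snooze": ("Ten more minutes. The day can wait.",
                       {"Get up anyway": "Morning"}),
            "Morning": ("You face the day, one small choice at a time.", {}),
        }

        def play(passage: str = "Start") -> None:
            text, links = passages[passage]
            print(text)
            for i, label in enumerate(links, 1):
                print(f"  {i}. {label}")
            if links:
                choice = int(input("> ")) - 1
                play(list(links.values())[choice])

        # play()  # uncomment to run interactively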

    Reply
  49. Tomi Engdahl says:

    Alva Noe: Don’t Worry About the Singularity, We Can’t Even Copy an Amoeba
    http://tech.slashdot.org/story/14/11/23/2342259/alva-noe-dont-worry-about-the-singularity-we-cant-even-copy-an-amoeba

    “Writer and professor of philosophy at the University of California, Berkeley, Alva Noe, isn’t worried that we will soon be under the rule of shiny metal overlords. He says that currently we can’t produce “…machines that exhibit the agency and awareness of an amoeba.”

    Artificial Intelligence, Really, Is Pseudo-Intelligence
    http://www.npr.org/blogs/13.7/2014/11/21/365753466/artificial-intelligence-really-is-pseudo-intelligence

    One reason I’m not worried about the possibility that we will soon make machines that are smarter than us, is that we haven’t managed to make machines until now that are smart at all. Artificial intelligence isn’t synthetic intelligence: It’s pseudo-intelligence.

    This really ought to be obvious. Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

    Philosophers and biologists like to compare the living organism to a machine. And once that’s on the table, we are led to wonder whether various kinds of human-made machines could have minds like ours, too.

    But it’s striking that even the simplest forms of life — the amoeba, for example — exhibit an intelligence, an autonomy, an originality, that far outstrips even the most powerful computers. A single cell has a life story; it turns the medium in which it finds itself into an environment and it organizes that environment into a place of value. It seeks nourishment. It makes itself — and in making itself it introduces meaning into the universe.

    Reply
