CEO of Deal Architect, a top advisory boutique recognized in The Black Book of Outsourcing; author of a widely praised book on technology-enabled innovation, The New Polymath; and a prolific blogger, writing about technology-enabled innovation at New Florence, New Renaissance and about waste in technology at Deal Architect. Previously an Analyst at Gartner and a Partner with PwC Consulting. He has keynoted many business and technology conferences and has been quoted in the Wall Street Journal, BusinessWeek, The Financial Times, CIO Magazine, and other executive and technology publications.

3 responses to “Hasso’s Dozen”

  1. Hasso’s blog post needs one more element | sa-portals

    […] Read the source article at Enterprise Irregulars […]

  2. Vinnie Mirchandani for Dr. Plattner

    Dr. Plattner responded to me on his blog. His comments:

    “hi vinnie,

    new business models – please look at sap’s business network.

    the business suite as on demand service is available in the hana enterprise cloud. some customers prefer to buy the software. once you know you will use it for 4+ years it could be financially more attractive.

    the data center comments are interesting and i will let helen arnold, sap’s cio, talk to you.

    that r/3 didn’t use stored procedures is true. the sERP version of the suite on hana not only dropped the transactionally maintained aggregates and all redundant materialized views, but heavily uses stored procedures and other libraries of the hana platform. the application code is being simplified dramatically. the transactional performance increases accordingly.

    the hardware issue has several aspects. the certification process of hana made people think that hana requires special hw. that is not true. sap only wanted to make sure that the configuration recommendations were followed. i believe vendors can now self-test their configurations.

    one reason was: hana needs dram. but how much? let’s take sap’s erp system: the hot data will require less than 500 gigabytes (i predict less than 300), the cold data will then be around 500-700 gigabytes. the data requirements are in fact really small. for the hot data i recommend that all data is always in memory. for ha a hot standby, which will be used for read-only applications, i recommend a hana-maintained identical replica. any of the two-terabyte single-image systems is good enough. for the cold data you can use similar hw or a cheaper scale-out approach with smaller blades. not all cold data will stay loaded in ram. a purging algorithm will remove data without access requests.

    sap is in the top 5% (a guess) of sap users. the largest smp system available for hana currently has 32 cpus with 480 cores and 24 terabytes. i don’t see any hw capacity problem. the pricing varies from vendor to vendor, but the fact that there are several vendors will take care of it. so where is the problem? the hot/cold split hasn’t yet shipped and the purge algorithm is only in the later releases of hana.

    it will soon be available, like the hana-managed replication. i urge every on-premise suite on hana customer to contact sap for sizing and configuration assistance. the sERP simplification will finish this year; sFIN is already shipping and doing great. this is what i meant with trusting sap with regards to keeping delivery promises.

    the data explosion is taking place with mostly read only data (text, video, sensor output, etc) which can easily be organized in a scale out fashion on cheap hw. actually, hana is happy to calculate the indexing and keep data only as indices in ram for processing.

    thanks for the feedback. as you know i take blogging seriously. everybody around sap tries to scale up and make the value proposition attractive; we are moving much faster than in the r/3 days – in a much larger market.”

  3. Vinnie Mirchandani

    I responded to Dr. Plattner’s comment above on his blog:

    Dr. Plattner, I can debate many of your points, but ideally that is done in person.

    I do want to pick on one of your points. You say “the (hardware) pricing varies from vendor to vendor, but the fact that there are several vendors will take care of it. so where is the problem?”

    Like you, I believe in competitive marketplaces and price equilibrium. But in the SAP economy, as my book research shows over and over, even with plenty of choice in R/3/ECC systems integration, hosting, offshore application management, MPLS circuits, and other elements, prices have stayed shockingly high and project failure rates/ticket volumes unacceptably high.

    As part of my book research, an SAP customer sent me this:

    “HANA is the new “UNIX”. Big iron, expensive and niche. H/W vendors have to make their margin somewhere. They certainly can’t run a business selling 2 socket servers. Some time ago SAP did have suggested hardware pricing on their website, but it caused a revolt with their “partners” and they took it down. Attached is a copy of the cached version.”

    My suggestion: whether because of SAP’s lack of proper controls or your customers’ inability to manage your partners, SAP should not continue to expect the free market to have the impact here that it does in other sectors. With HANA, SAP should more aggressively manage its ecosystem.