
The Design Thinking Approach


“Design thinking is a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.” – Tim Brown | President and CEO, IDEO

This evening, I attended a very inspiring talk titled ‘Trends in Innovation and Knowledge’, delivered by Dr. Raomal Perera, on innovation, creativity, and particularly the design thinking process. It was a short and succinct talk, touching upon how the economic center of gravity has traversed the globe from past to present, the story of the IDEO team that innovated on and improved the design of the shopping cart, and the design thinking process itself. This post is mainly about the key takeaway from the session (design thinking), interwoven with a little of my own insights, learnings, and musings.

Design thinking, both the philosophy and the process, has a rich history, as I found out from this article on Medium. It is a human-centric approach to problem solving and innovation: creative, effective, and applicable to many varied fields and industries. During the talk, Dr. Raomal conveyed the concept that ‘design’ is not a product, an artifact, aesthetics, or a document or diagram, but a process. This brought to my mind (coming from a software engineering background) a quote from a book I recently read by James Coplien, titled ‘Lean Architecture for Agile Software Development’. That particular quote goes as follows (emphasis added by me):

“You might ask: Where does (software) architecture start and end? To answer that, we need to answer: What is architecture? Architecture isn’t a sub-process, but a product of a process – a process called design. Design is the act of solving a problem.” (Lean Architecture for Agile Software Development, James Coplien – 2010)

To me, many of the ideas in the design thinking process resonate with the ideology and outlook that James Coplien has towards building really great software systems. His many talks on capturing the human mental model in code, and his formulation of the DCI (Data, Context, and Interaction) architecture, are a testament to that. The design thinking process and approach, used correctly, would be a game changer in any industry, field, or academic domain. More so in the software services domain, whose core objective could basically be summed up as eradicating end-user pain points and enhancing the user experience through value addition.

Of course, as Dr. Raomal mentioned at the end of his talk, in order for any creativity, innovation, and seeds of design thinking to take root in a company or enterprise, there must be a high level of support and fostering in terms of culture, habitability, and resources (and in this context, resources does not always equate to R&D budgets). But done right, the design thinking approach will produce amazing products, services, and systems that go beyond end users’ expectations, and solve their problems in some of the most innovative and creative ways possible.

The video at this link contains a short but informative high-level overview of what design thinking is all about. For a short explanation of the design thinking process and its components, the video here does a great job in my opinion. Finally, for those who want to delve deeper into the design thinking process and related activities, take a look at this Stanford webinar.


The Microsoft Ecosystem for Artificial Intelligence and Machine Learning


The dawn of artificial intelligence… hopefully the human race will map a date and place to it somewhere down the line in the future, if we survive the technological singularity that is… But then again, the technological singularity is (was?) a hypothesis, and a hypothesis is just that: a hypothesis.

Techno-satire aside, the dawn of artificial intelligence is still yet to see the light of day (no pun intended). We are still progressing in AI research, and the difficulty lies in the word ‘intelligence’… it is a rather large bucket into which many definitions are thrown, and each research milestone that meets a particular criterion or definition will announce that AI is progressing the way it should. The positive aspect of this fragmentation and branching out is that it has given rise to so many of the advances we see in the world around us, from self-driving cars, to personal assistants, to adaptive machines learning from the content we humans have produced.


The history of traditional AI research had its beginnings in the summer of 1956, at a Dartmouth workshop titled “Dartmouth Summer Research Project on Artificial Intelligence”. Most of the attendees would go on to become the leaders and experts in the field of AI. The workshop was not a very formal one, leaning more towards discussions and talks between individuals and groups in the main math classroom at the top floor of the college. Some of the outcomes of the workshop were symbolic methods, early ideas for expert systems, and ideas on deductive vs inductive systems. Of course, the advent of digital computers at around the same time (actually, earlier) was a great catalyst and platform for implementing many of the ideas of AI research in practice. But it was here that many found out that creating the general AI everyone envisaged was not so easy.

Today, there is a major shift in AI research and trends. The areas of machine learning and deep learning have sprung up and, coupled with the power offered by cloud computing, have made major strides into areas previously unimagined. And unlike the past decades, where AI was driven by the scientific community, the current paradigm shift is steered by enterprises and consumers, who are digitizing their everyday lives and experiences. Satya Nadella, the current CEO of Microsoft, made the following statement in 2016, at the World Economic Forum held in Switzerland:

This new era of AI is driven by the combination of almost limitless computing power in the cloud, the digitization of our world, and breakthroughs in how computers can use this information to learn and reason much like people do. What some are calling the Fourth Industrial Revolution is being accelerated by these advancements in AI, and this means that all companies are becoming digital companies — or will soon be.

This is an important statement that gives meaning and validity to the advances Microsoft has made in its own cloud platform and stack, Azure. The ‘new era’ of AI hinges on the fact that cloud computing has empowered the enterprise, science, technology, government, and society to undergo the ‘Fourth Industrial Revolution’. In this regard, Microsoft has made many advances, and set the stage for some amazing ways of learning, utilizing, and working with AI and machine learning.


At times, the difference between AI, machine learning, and deep learning is a bit fuzzy. Historically speaking, AI gave rise to machine learning, and machine learning in turn gave rise to deep learning. The rise of machine learning was a shift in thinking towards how machines could learn from existing data, and adapt or change their behavior. This differs from traditional (enterprise) AI built on hard-coded heuristics, which were essentially knowledge-based systems with well-defined inference rules.

Deep learning is the new kid on the block, a sub-field of machine learning that works with algorithms inspired by how the human brain functions. It revived the study of artificial neural networks (which had lain dormant for a decade or two), because we now have the computing power (cloud computing again) to train networks of many layers and millions of artificial neurons – “deep” networks – and mimic how the human brain learns, better than any technology has ever done before.
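To make the “inspired by the brain” idea a bit more concrete, here is a toy sketch of a single artificial neuron in C#. The weight and input values are invented for illustration; real networks learn their weights from data, and deep networks stack many layers of units like this one:

    using System;

    // A toy artificial neuron: a weighted sum of inputs passed through a
    // non-linear activation function. The weights below are made-up values.
    class Neuron
    {
        static readonly double[] Weights = { 0.4, -0.2, 0.7 };
        const double Bias = 0.1;

        static double Activate(double[] inputs)
        {
            double sum = Bias;
            for (int i = 0; i < Weights.Length; i++)
                sum += Weights[i] * inputs[i];
            return 1.0 / (1.0 + Math.Exp(-sum)); // sigmoid: squashes into (0, 1)
        }

        static void Main()
        {
            // Three invented input signals produce one output "firing strength".
            Console.WriteLine(Activate(new[] { 1.0, 0.5, -1.0 }));
        }
    }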

In the context of enterprises and consumer technology, traditional AI systems made the human being the decision maker in the value chain. Machine learning systems can learn from existing data and predict results, events, patterns, and objectives, but humans still take on the role of applying prescriptive actions based on those results. For example, in credit card fraud detection using machine learning, suspicious activity can be inferred from transactional data, but human actors need to take the next steps based on that information. A major goal of machine learning is for machines to learn and apply prescriptive actions themselves, removing the human altogether from the equation.
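As a hedged illustration of that predictive-versus-prescriptive distinction, here is a minimal sketch; the scoring function and the threshold are entirely hypothetical stand-ins for a trained model:

    using System;

    class FraudDetection
    {
        // Stand-in for a trained model; in reality this score would come
        // from a machine learning service. Rule and threshold are made up.
        static double PredictFraudScore(decimal amount) =>
            amount > 1000m ? 0.95 : 0.10;

        static void Main()
        {
            double score = PredictFraudScore(2500m);

            // Predictive analytics: surface the score for a human to act on.
            Console.WriteLine($"Fraud score {score}: queued for analyst review");

            // Prescriptive analytics: the system acts on its own prediction,
            // removing the human from the equation.
            if (score > 0.9)
                Console.WriteLine("Transaction blocked automatically");
        }
    }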

Coming back to the cloud technology stack and ecosystem provided by Microsoft for AI, machine learning, and deep networks, the latest cutting-edge offering is the Cortana Intelligence Suite (CIS). This is a collection of cloud technologies and tools that enables anyone to build end-to-end advanced analytics systems and pipelines. The short (but comprehensive) Channel 9 video at this link, by Buck Woody from the machine learning and data science team at Microsoft, gives a great overview of the CIS stack and its components. The image below also shows the individual components of CIS, and the general direction of data flow:

[Image: the components of the Cortana Intelligence Suite and the general direction of data flow. Source: biz-excellence.com/2015/10/12/cortana-intelligence-suite/]

The components of the CIS stack are as follows (given in no specific order, but the list roughly starts where data is consumed, and finishes where informed decisions are made or automated):

  • Azure Data Catalog – A metadata repository for all kinds of data sources, available for discovery by users. Whether on-premises or off, any discoverable data is tagged and labeled here, giving users easy access to locating data, and removing duplicate effort related to data collection.
  • Azure Data Factory – The data factory is used to orchestrate data movement. Data from disparate sources and systems needs to be moved around in the pipeline or advanced analytics system being implemented, and the data factory moves all this data within the system wherever it is required.
  • Azure Event Hub – A highly performant telemetry ingestion service, which can be used to consume and store event data from millions of events generated by IoT devices, sensors, and other hardware or software components that raise events of very large volume, variety, and velocity (see the code sketch after this list).
  • Azure Data Lake – The data lake is used to store very large amounts of structured, semi-structured, and/or unstructured data. It is composed of two parts: the data store, and a query mechanism whereby many languages can be used to query the ‘lake’ of data available. Data lakes are usually used to store very large volumes of data in their raw native format, until they are needed by a system or a process.
  • Azure SQL DB, SQL Data Warehouse, DocumentDB – These are all used to store and relate the data in a system, with Azure SQL DB being the usual MSSQL offering on Azure. Azure SQL DW is a highly scalable cloud data warehouse solution. Azure DocumentDB is the NoSQL database hosted on Azure, similar to other document-based databases but offering higher performance and scalability. A great case study of DocumentDB is Next Games using it as the backend for their mobile game ‘The Walking Dead: No Man’s Land’.
  • Azure Machine Learning, Microsoft R Server – Azure ML and R Server are the environments where the learning takes place. Predictive analytics and models are created using the data available, catering to many different consumers. Keep an eye out, as I will be posting more articles on machine learning using Azure ML studio.
  • Azure HDInsight – This is Microsoft’s Hadoop infrastructure implementation, which is Apache compliant, with loads of functionality and features added on top of it. If you are thinking Big Data, this is what you need.
  • Azure Stream Analytics – Sometimes the data we need to analyze is produced in real time, at very high velocity, and needs to be analyzed in real time (or near real time) as it is created; this is where stream analytics comes in. In scenarios where IoT devices, sensors, vehicles, etc. produce high-velocity data that must be processed and analyzed in real time, stream analytics provides endpoints for consuming and analyzing this data.
  • PowerBI – This is Microsoft’s visualization tool: data comes in, and you get rich visualizations out. It is available as a service consumed by a variety of clients, and also as a desktop client and an app for mobile devices.
  • Cortana, Cognitive Services, Bot Framework – These are programming environments exposing services that can be used to build interfaces where human-computer interaction takes place. Here we would like to ‘talk’ to the system and get insight, answers, predictions, and suggestions from all the data processing we did in the pipeline. Cortana is the well-known digital assistant on Windows devices, but it is also an API you can program against. Cognitive Services are a set of APIs ranging from text, image, and speech recognition, to translation and content recommendation, and much more. Finally, the Bot Framework is a great tool for building interactive and intelligent bots that can help and converse much like a human being.
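As a concrete taste of the ingestion end of this pipeline, here is a minimal C# sketch of sending one event to an Azure Event Hub using the Service Bus SDK of the time. The connection string, hub name, and payload are hypothetical placeholders; substitute the values from your own Azure portal:

    using System;
    using System.Text;
    using Microsoft.ServiceBus.Messaging; // WindowsAzure.ServiceBus NuGet package

    class TelemetrySender
    {
        static void Main()
        {
            // Hypothetical connection string and hub name; use your own.
            const string connectionString =
                "Endpoint=sb://my-namespace.servicebus.windows.net/;" +
                "SharedAccessKeyName=send;SharedAccessKey=<key>";
            var client = EventHubClient.CreateFromConnectionString(
                connectionString, "sensor-events");

            // One JSON-encoded reading from an imaginary IoT sensor.
            string reading = "{ \"deviceId\": \"sensor-01\", \"temperature\": 22.5 }";
            client.Send(new EventData(Encoding.UTF8.GetBytes(reading)));

            Console.WriteLine("Event sent.");
        }
    }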

So, in a nutshell, that’s the spectrum of technology and tools you would be working with in the CIS stack. Of course, you are not required to use everything listed above. Rather, you would select the tools that work best for your requirement, and build your customized implementation accordingly.

Imagine a scenario where the executives in your enterprise task you with building a complicated advanced analytics system, in order to better leverage business intelligence and predictive analytics for improving all facets and units of the business. Now imagine doing it using in-house software and hardware, purchasing tools and licenses that will cost thousands of dollars, and programming from scratch to utilize on-premises nodes and clusters for powerful computing and number crunching.

If you look at the cost and effort associated, and compare it with the fact that I can log in right now to Azure ML studio for free (using my msn id) and build a predictive analytics model and host it as a web service within a few minutes to an hour or two, you can see how powerful and cost effective the CIS really is. Of course, a real-world enterprise requirement would take much more time than that (my example was overly simple), but it would still be far more cost effective and simple to use CIS and focus on the (business) matter at hand, rather than dealing with the mechanics, internals, algorithms, data pipelines, storage, VMs, and all the bugs of a custom solution built on-premises from scratch.
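To give a feel for how little ceremony is involved, here is a hedged sketch of calling a predictive model published as an Azure ML web service. The endpoint URL, API key, and input column are placeholders; Azure ML studio shows the real values and input schema when you publish your experiment:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;

    class ScoringClient
    {
        static void Main()
        {
            // Hypothetical placeholders; the studio provides the real ones.
            const string endpoint = "https://<region>.services.azureml.net/workspaces/" +
                                    "<workspace>/services/<service>/execute?api-version=2.0";
            const string apiKey = "<your-api-key>";

            using (var http = new HttpClient())
            {
                http.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", apiKey);

                // The input columns depend on your experiment; illustrative only.
                string body = "{ \"Inputs\": { \"input1\": { \"ColumnNames\": [ \"Amount\" ], " +
                              "\"Values\": [ [ \"250.00\" ] ] } }, \"GlobalParameters\": { } }";

                HttpResponseMessage response = http.PostAsync(endpoint,
                    new StringContent(body, Encoding.UTF8, "application/json")).Result;
                Console.WriteLine(response.Content.ReadAsStringAsync().Result);
            }
        }
    }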

Keep an eye out, as I will be posting more articles in the upcoming weeks dealing with building end-to-end analytics solutions using the Cortana Intelligence Suite, machine learning models with Azure ML studio, and concepts/algorithms in machine learning and deep nets.

Engineer or Programmer? The (Non-Existent) Existential Dilemma…


This article describes my opinion of the term and title ‘software engineer’ and the implications behind it, and how it ties in (or should tie in) to the viewpoints of the engineering discipline at large. (Blog header illustration created using Canva.)

I very rarely pay attention to philosophical debates relating to software development, but last week I was reading an article I came across at ‘The Atlantic’, which prompted a train of thought, including this blog post. I suggest you read it before continuing, as it has a very interesting premise on why programmers should not call themselves (software) engineers. I neither agree nor disagree with the author or the article, but it had some very interesting ideas that motivated me to document my thoughts and opinions in this post…

First, a bit about the term ‘software engineering’ from my own perspective and understanding. Throughout my career, I have worked with people who were (and are) referred to as software engineers, themselves being graduates of computer science, information systems, or software engineering, the title matching the undergraduate degree that each followed in his or her academic journey. I myself hold a degree in ‘software engineering’ issued by a well-known university situated around Regent Street, north-west of central London in the United Kingdom. Therefore, the term or title ‘software engineer’ was something I was (and am) quite comfortable with, whether in how I introduce myself, or how I refer to my colleagues.

On the origins of the term ‘software engineering’, the article in question quotes a fact that is commonly taught in the industry and field of software development today: that the term ‘software engineering’ was first deliberately used at the 1968 Garmisch-Partenkirchen conference organized by NATO. It is said that the term was coined provocatively, in a bid to challenge software development at the time to align with the rigid processes and methods followed by the established branches of engineering. But it was interesting for me to come across an article by Bertrand Meyer, which provides some evidence that the term was used at least two years prior to the NATO conference, and in a positive light, indicating (at the time) that software development should be considered an engineering discipline in its own right.

The established engineering disciplines are termed ‘established’ based on the rigid processes, regulatory bodies, certifications, licenses, continuous learning, and ethical codes they follow. I am quite firm in my understanding that this is a good thing. But some of these aspects came about mainly due to the problems and engineering disasters that were prevalent in the 19th and 20th centuries, which created the need to bring in standards, ethics, regulations, certifications, and the rest. There is always a scenario which prompts the need for better processes, regulations, and policies. This was widely characterized in the software development world in the last two decades, where a myriad of development philosophies and approaches were invented. Even Robert C. Martin discusses this situation, and tries to convey the message that if software developers (engineers?) don’t start getting their act together, we will be imprisoned by regulations and policies dictating how we should develop software.

If the practice and business of developing software is expected to evolve into an (established) engineering discipline, its progress will always be compared to the other mature engineering disciplines. This outlook and comparison was not favoured by some, which resulted in a large number of software development circles stating that software is more art/craft/science than engineering, and that there are better ways to build software than using rigid engineering processes. This viewpoint is widely respected and quite popular in the software development world today. On this side of the wall, software development should be lean, agile, and dynamic, without heavyweight engineering processes and regulations retarding these ideas, which is practical for all intents and purposes. Software is an artifact and material unlike anything that traditional engineers are used to working with, and early lessons taught us that building software to specifications with rigid processes, blueprints, and schematics was not the best way to go about things.

But the problem was (and still is) that the computing infrastructure (used by the public, consumers, clients, business users, etc.) which is the end product of software development still requires the same rigidity, reliability, safety, and conformance that a bridge or building built by a civil engineer should have. This is expected by a society which functions daily on the trust and understanding placed in the infrastructure around them, built by engineers. And therein lies the disdain towards the software development world (mostly from the engineering community, I suppose), where systems go wrong, bugs seem to evolve and reproduce, and software development is an activity that seems random and haphazard. The engineering world has its fair share of problems and disasters, just as there are software project failures and disasters. But a crucial difference is that the engineering disciplines will emphasize and prioritize regulations, standards, public safety, and ethics, even when failing to deliver.

Just as engineering has foundations in the natural sciences, software ‘engineering’ too has its origins in philosophy, mathematical logic, and various other branches of science that have no direct correlation to what we do today as developers of software. The early proponents and pioneers who set the stage for software development to become a widespread phenomenon were computer scientists, mathematicians, particle physicists, and information theorists. Not engineers per se. I was searching online and came across the Professional Engineers Ontario website, where the definition of professional engineering is formally stated. I specifically chose Canada as it is one of the countries that are extremely strict about the title ‘engineer’ and who is licensed to carry it. I would like to directly quote the definition of the practice of engineering, taken from their website:

Professional engineering is:

  1. any act of planning, designing, composing, evaluating, advising, reporting, directing or supervising (or the managing of any such act);

  2. that requires the application of engineering principles; and

  3. concerns the safeguarding of life, health, property, economic interests, the public welfare or the environment, or the managing of any such act.

Ring a bell? At least for me it did. This seems to sum up what I have been doing for the past few years in my tenure as a software engineer. In a sense, what’s missing is the regulatory bodies, the policies, licenses, code of ethics, and the rest of the bells and whistles.

Let me summarize (thank you, if you are still reading) and propose an approach towards building better software, gaining the respect of traditional ‘mature’ branches of engineering, and proving ourselves worthy of the title ‘software engineer’.

  • First, I lied about software development not having a code of ethics: Software engineering does have a code of ethics, known as the IEEE-CS/ACM Code of Ethics and Professional Practice, but no regulatory body to enforce it. It would be a great start to read this, and attempt to see how applicable it is in our everyday work. We may not have regulatory bodies, policies, or practice licenses, etc., but that is not an excuse for not committing to the best standards and practices, and/or not being ethical. In the popular book Content Inc., one of the main (controversial) ideas conveyed by the author is that you should build an audience before you figure out what it is you want to sell them. Ours is a somewhat similar situation, where we need to build our audience (a society expecting traditional engineering recipes) and figure out what to sell them (trust and reliance in software development using tailored/non-traditional engineering processes, backed by ethics and a sense of responsibility).
  • Second, the use of sound engineering principles: The design and architecture of the software-intensive systems we build, the practices and guidelines followed when coding, the project management and costing aspects, and the control and quality checks all rely on the application of sound engineering principles in the construction of software. True, there is no universal standard or guideline dictating how we go about doing this (in fact, there is a great variety of approaches and methodologies), but whatever we trade off or cut down on, it would be advisable to always reflect and check whether what we are doing can still be comfortably called ‘applying sound engineering principles’, without any hint of doubt in our mind.
  • Third, continuous learning and certifications: The software development world is plagued with people who are too busy learning and have no time to work, or are too busy working and have no time to learn. Jokes aside, continuous learning is something we developers are familiar with, due to the evolving trends and technology waves that tow us in their wake. There is no one person or body to standardize it and tell us the how, why, what, and when to learn, but we have ample certification programs to showcase to employers how good we are. Maybe we should have that same pride and attitude, in showing that we are capable of building great software, and that our certifications and achievements are a testament to our responsibility and obligations towards our stakeholders as well.

That was not a short summary, and I apologize for that. Looking back, this post is not a negative response to the original article at ‘The Atlantic’; rather, I was interested in critically analyzing and identifying the weaknesses in software engineering, which is constantly being targeted by the traditional engineering disciplines. That being said, in ending, a big salute to all the software engineers, developers, and programmers who constantly contribute to advancing the field of software engineering towards being a mature and well-respected discipline and industry.

A Short List of My Favourite Google Chrome Extensions and Apps

The web browser started out as a simple program that allowed access to HTML content. Today, it has become a portal for media content, real-time communication, and an extensible hub for third-party functionality. The history of web browsers is very interesting, starting from the WorldWideWeb browser created by Tim Berners-Lee, which was initially used at CERN. This blog post details a few of my favorite Chrome extensions and apps, which I use more or less on a daily basis.

Momentum


Momentum replaces the content of the new tab page in Chrome with a personalized dashboard and some very stunning background visuals. It contains a personalized message, weather information, favorite links, and a to-do list.

Chrome Web Store Launcher


The Chrome Web Store Launcher enables you to manage and launch your installed Chrome apps, as well as search for new apps. It is also possible to define five favorite applications that come up on top for easy access.

Dark Reader


The Dark Reader extension is very interesting and useful, as it does not simply invert the colors of the web pages displayed in the browser. Instead, it provides a set of filter and font options, which can be tweaked to get an optimal viewing experience that is easy on the eyes.

Turn Off The Lights


The Turn Off The Lights extension does exactly what it says: it dims the whole web page except the video you are watching. This extension is a simple but very effective tool for giving a somewhat cinematic experience to the videos you watch, and it also helps you focus on the video content.

PDF Viewer


The PDF Viewer extension provides a really cool and lightweight PDF reader right within Chrome. From the time I installed the extension, I have done almost all of my reading in Chrome. The main winning points for me were the rendering/scrolling speed, and the ability to move between reading and looking up web references from within the browser itself.

Gliffy Diagrams


The Gliffy Diagrams app is a great tool for all manner of diagramming requirements, from flowcharts to UML views. It has a great collection of diagram types, and some nice themes that can change the look and feel and the colors of the diagrams created.

Postman


Postman is a really great Web/REST API client, where you can test your API with HTTP requests. It is quite a feature-heavy app, in which you can see request-response history, create custom scripts, and document your API. If your work touches on building HTTP APIs, this tool is a must.

Pixabay


Pixabay is a simple app which launches the Pixabay site, where all the images and videos are available for free. They are released from copyright under Creative Commons and can be used royalty free. A great source of free media content for designers and content creators.

WordPress


And finally, the WordPress app, which I use to create all my blog posts (including the current one). This app is a very intuitive program, with a clean and neat interface that is a pleasure to use. Compared with logging into the WordPress admin portal to create blog articles, the WordPress app is a lifesaver.

The above Chrome extensions and applications are the ones I use the most; they maximize browser capability for increased productivity in my work and research. What tools, extensions, and apps do you use for work, play, or study? It would be great to hear about your recommendations and favorite extensions in the comments section below…

HoloLens Development using Visual Studio 2015 and Unity HoloLens Technical Preview

[Header image]

[image credits]

Last week I did a session at Dev Day 2016 on the topic “Developing for the HoloLens”, and this post is a tutorial complementary to that session, demonstrating how a simple holographic application can be built and deployed to the HoloLens emulator, using the tools provided by Microsoft and Unity…

Contents:

  1. What Are Mixed Reality Applications?
  2. Software and Hardware Requirements for HoloLens Development
  3. Developing 3D Holograms using Unity HoloLens Technical Preview
  4. Building and Generating the VS2015 solution from Unity
  5. Deploying the application in the HoloLens Emulator
  6. Useful Links and resources

What Are Mixed Reality Applications?

In general, everyone has quite a good idea of what Virtual Reality (VR) and Augmented Reality (AR) are all about, given the AR/VR devices that have proliferated in the market in recent times, plus the large number of articles on AR and VR available on technical news sites, blogs, and vendor sites out on the internet. With the introduction of the HoloLens, Microsoft created one of the most ground-breaking and disruptive innovations in display technology, introducing a new term, “Mixed Reality”, into the mix (no pun intended). Virtual, augmented, and mixed reality are easy ideas to grasp if you look at what they try to offer in terms of how we see the world:

  • Virtual Reality offers the user an immersive experience, where “presence” is the key factor. The external (real) world is shut off, and the virtual world takes over the user and his senses. Interestingly, these types of applications and virtual spaces have been researched for quite some time (including being heavily portrayed in early sci-fi movies and books), as evidenced by this early game called ‘Dactyl Nightmare’ from the early 90’s.
  • Augmented Reality is all about overlaying the real world with graphics and animations which serve to be informative or otherwise entertaining. These could range from scientific applications used out in the field, to mapping technology used by everyday people for driving or cycling. Examples include map overlays, directional indicators and animations over roads and buildings when viewed through phones, and games like Pokémon Go.
  • Mixed Reality is a concept brought into the mainstream recently, in large part by the introduction of the Microsoft HoloLens. This is a new way of looking at the world because, unlike VR, you can still see the real world around you. And unlike AR, the user is given the feeling that any digital artifact (hologram) he is seeing is part of the real world. Holograms seen through the HoloLens become part of the world around you, responding to the solid objects and surfaces in your room, reacting to touch and sound, and behaving as if they belong in the real world.

[Image: the virtual, augmented, and mixed reality spectrum]

[image credits]

With regards to the HoloLens device itself, I will not go into any detail on the specifications and what it is capable of… You can find a lot of its capabilities in the video at this link. Similar to the session I did at Dev Day 2016 last week, we will take a developer’s perspective and see what we need to know about setting up a development environment for programming the HoloLens. After that, we will dive into building and deploying a simple HoloLens application, using (free) tools provided by Microsoft and Unity.

Software and Hardware Requirements for HoloLens Development

The HoloLens development tools should be installed exactly as instructed at this link.

The following tools provided by Microsoft and Unity need to be installed in your development system. Unity is a very popular cross platform game engine used to create games and interactive content for many platforms and devices. You can find many resources online for learning Unity. The Unity HoloLens Technical Preview is a build of Unity, that is implemented specifically to help create content for the HoloLens.

When installing Visual Studio 2015 Update 3 (or if you already have it installed), one important thing to note is that the two nodes below must be selected in the feature selection during installation:

[Screenshot: Visual Studio 2015 feature selection, with Tools and Windows 10 SDK checked]

If “Tools” and “Windows 10 SDK” are not selected under the Universal Windows App Development Tools node, make sure you select them. If you have Visual Studio 2015 Update 3 already installed, you can modify the install from add/remove programs and add the above features to the installation.

One particular issue I had was that there were some errors when trying to download the Unity HTP from its site:

[Screenshot: the Unity HTP download page]

When I clicked the link shown above, sometimes it would download the setup file, and sometimes it would give an error saying the content was not available. In cases where the setup file did download, running the setup would give a “no internet connection” error midway through the install. These issues seemed to be resolved when I checked again at the time of writing this article. But just in case anyone else is facing similar problems, click the “Archive” tab and download a previous beta (which is what I did), which should work without any problems:

[Screenshot: the Unity HTP archive tab with previous beta downloads]

Once the Unity HTP setup file is downloaded and run, another (very) important thing is to make sure that the “Windows Store .NET Scripting Backend” is selected in the installation feature selection, as shown in the image below. If this is not selected, the Unity HTP installation will not have the proper build configurations to generate the Visual Studio project required for deployment to the HoloLens.

[Screenshot: Unity HTP installation feature selection, with Windows Store .NET Scripting Backend checked]

Moving on to the hardware and platform requirements, make sure the development system you will be working on meets the following (minimum) criteria:

  • 8GB of RAM (or more)
  • GPU
    • Should support DirectX 11.0 or later
    • Should support WDDM 1.2 driver or later
  • 64-bit CPU
    • CPU with 4 cores, or multiple CPUs with a total of 4 cores
  • 64-bit Windows 10 OS
    • (Editions: Pro, Enterprise, or Education)
    • Home Edition NOT supported (due to next point)
  • Enable “Hyper-V” in Windows (Not available in Windows Home Edition) from Control Panel > Programs and Features > Turn Windows features on or off
  • BIOS Settings
    • Hardware Assisted Virtualization should be enabled
    • Second Level Address Translation (SLAT) should be enabled
    • Hardware based Data Execution Prevention (DEP) should be enabled

Basically, any decent machine today should meet the above criteria and requirements. The exception would be if you were running Windows Home edition, in which case you might need to consider investing in upgrading the operating system.

Developing 3D Holograms using Unity HoloLens Technical Preview

Ok, now that we have some idea about mixed reality applications and the basic requirements for a development environment setup, let’s fire up the Unity HTP and dive into creating 3D content for our HoloLens application. (In each of the steps below, I will walk you through building the scene in Unity, and explain the reasons behind why we select certain settings and configurations.)

When you fire up unity (from this point on I will be referring to the Unity HTP as unity), you will initially be presented with the following window (when you run unity for the first time, it will ask you to create a free account if you don’t have one already):

[Screenshot: the unity start window]

Let’s click on the new project button, which will take us to the screen shown below:

[Screenshot: the unity new project screen]

In the window shown above, enter a project name, select a project location where the project will be saved, and make sure to select the 3D radio button, and click “Create project”. You will be taken to the main workspace of unity, in the new project you have created:

[Screenshot: the main unity workspace]

Next, let’s configure the main camera for our holographic application. Select the main camera in the left pane, and move it to the origin of the world coordinate space by setting the X, Y, and Z position values to zero in the Inspector pane on the right. We will also set the Clear Flags value to “Solid Color”:

[Screenshot: main camera position and Clear Flags settings]

The main camera is your window into the 3D world we are creating in unity. If we were designing a first-person shooter and not a HoloLens app, your monitor or screen would display whatever the main camera can see. In the case of our HoloLens app, the device is worn on the head and our eyes are the vantage point, so the main camera displays scenes (via the HoloLens) directly to our eyes, as if our physical eyes were the window showing what the main camera can see. This is why we will see 3D holograms merged into the actual physical world or room around us, as if they are part of the natural objects we see normally with our eyes.

Next (making sure the Clear Flags setting is set to “Solid Color” as described in the last step), click on the background color to open the color selector window, and set the R, G, B, and A values to zero:

[Screenshot: background color set to R=0, G=0, B=0, A=0]

Again, if this were an FPS game we were designing, we would have drawn a 3D world to be viewed through the main camera. But in the case of the HoloLens, the device can add light to the existing view you see of your real environment; it cannot take light away from your eyes (i.e. it cannot display/render black). Anything the HoloLens renders as black will be transparent, which is why we set the color as shown above. This renders the whole scene transparent when the application runs, and we will be able to see the real world through the HoloLens, plus any holograms we render in the scene.

Next, let’s add a 3D object which will serve as our hologram. (Holograms can be much more complicated, but for the purpose of this tutorial, our 3D cube will serve as a simple hologram.) Click on the Create drop-down in the left pane (Hierarchy window) and add a cube. Once added, select the cube in the left pane and set its position to X = 0, Y = 0, and Z = 5, in order to position the cube 5 meters in front of our eyes (i.e. 5m in front of the main camera). All position units are in meters, so it is very easy to relate the position values to the actual scene we will see via the HoloLens. When viewed through the HoloLens, the cube we just created will be 5m in front of us (placed in the real world), literally.

[Screenshots: adding the cube from the Create drop-down, and the cube in the scene]
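Incidentally, everything we have done in the editor so far can also be expressed as a script. Here is a minimal sketch (the class name is my own invention, and the tutorial itself sticks to the editor):

    using UnityEngine;

    // Reproduces the editor steps above from code: camera at the origin
    // rendering transparent black, and a cube 5 meters in front of it.
    public class SceneSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = Camera.main;
            cam.transform.position = Vector3.zero;
            cam.clearFlags = CameraClearFlags.SolidColor;
            cam.backgroundColor = new Color(0f, 0f, 0f, 0f); // transparent on HoloLens

            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.position = new Vector3(0f, 0f, 5f); // 5 meters ahead
        }
    }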

Now, let’s enhance our 3D cube a bit, because it looks very plain and not very interesting. We will texture our cube and have it rotate perpetually, which is one step better than an inanimate white box. First, the texture. Textures are basically image files that can be used to ‘skin’ our 3D objects. If you search online for ‘free textures for 3D meshes’ you will find a lot of free textures, and the unity store itself has plenty you can use. I will use a texture I have, which is an image file (WoodTexture.jpg) of wood-like material. Just open the folder where you have your textures (i.e. image files) and drag the required texture to the assets pane:

[Screenshot: dragging the texture file into the assets pane]

Next (as shown in the three images below), right click inside the assets pane and create a material asset, and name it “CubeMaterial”. Once the material is created, select it in the assets pane, then drag the texture we added earlier into the “Albedo” property of the CubeMaterial in the inspector pane (you can play around with the metallic and smoothness settings; you will see a preview in the sphere shown at the bottom of the inspector pane). Once that is done, drag the CubeMaterial from the assets pane and drop it on the cube in the hierarchy pane to the left. This will texture the cube in our scene window:

[Screenshots: creating the material, assigning the texture to the material’s Albedo property, and applying the material to the cube]

Now that we have a textured cube/hologram, let’s animate it a bit by having it perpetually rotate around a particular axis. We will do this by adding a script which defines the behavior we would like to see. Right click within the assets pane and create a C# script, and name it “RotateCube”. Once created, right click the script and select open, which will open it in Visual Studio. Delete the Start() method in the script, and add a public instance variable named speed, of type float, with the value 20f. Inside the Update() method, add the following statement:


transform.Rotate(Vector3.up, speed * Time.deltaTime);

Don’t worry too much about the syntax; unity has a great scripting API, and I recommend that you read through it to get a better idea of what is possible. After the modifications, the code should look similar to what is shown in the third screenshot below.
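For reference, here is the complete script, reconstructed from the steps above:

    using UnityEngine;

    public class RotateCube : MonoBehaviour
    {
        // Rotation speed in degrees per second.
        public float speed = 20f;

        // Called by unity once per frame; multiplying by Time.deltaTime
        // makes the rotation frame-rate independent.
        void Update()
        {
            transform.Rotate(Vector3.up, speed * Time.deltaTime);
        }
    }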
[Screenshots: creating the C# script, opening it, and the modified script in Visual Studio]

Next, save the modifications in Visual Studio and exit the IDE. Back in our unity workspace, drag the script and drop it on the cube in the hierarchy pane. Once this is done, you can click on the “Game” tab in the workspace window and click the play button to see the effect of the script on the cube. You should see a rotating, textured cube. Click the play button again to stop the animation.

[Screenshots: the script attached to the cube, and the spinning textured cube in the Game view]

Ok, we have just created a textured, rotating 3D hologram, which we will build and deploy to the HoloLens emulator using Visual Studio, as explained in the next section. Save the scene we just created by going to File > Save Scene, and give it a name you prefer.

Building and Generating the VS2015 solution from Unity

Now that our scene is ready, let’s tweak some configuration settings required for building and generating the Visual Studio solution.

First, click on Edit > Project Settings > Quality, and in the Inspector pane, set the option below the green Windows Store application icon to Fast:

[Screenshots: the Edit > Project Settings > Quality menu, and the quality level set to Fast]

Next, go to File > Build Settings, which will bring up the build settings window. Select the “Windows Store” option under Platform, and click on the Player Settings button, which will present some configurations in the inspector pane. In these settings, select “Other Settings”, make sure the “Virtual Reality Supported” checkbox is checked, and check that the Windows Holographic SDK is selected in the virtual reality SDK list. (Make sure you have selected the Windows Store platform in the build settings window; otherwise you may not see the proper options in the inspector pane when you click Player Settings.)

[Screenshot: player settings in the inspector pane, with Virtual Reality Supported checked]

Finally, click the Add Open Scenes button in the build settings window and add the scene we created. Set the SDK option to “Universal 10” and the UWP Build Type option to “D3D”. Make sure the Unity C# Projects checkbox is checked, and click Build. This will open a file dialog asking where to save the generated Visual Studio solution. Create a new folder within the file dialog called VSapp and click Select Folder, which will build the solution and save it in the VSapp folder. Once the build is done and the files are generated, the folder contents will be opened:

[Screenshot: the final build settings]

[Screenshot: the generated solution files in the VSapp folder]

That is all there is to building and generating the Visual Studio solution based on the unity scene and holographic objects we created. In the next section, which is also the final step of this tutorial, we will open the generated solution in Visual Studio and deploy it to the HoloLens emulator.

Deploying the application in the HoloLens Emulator

If you’ve followed the tutorial thus far, great! In this final step, let’s actually deploy the HoloLens application to the emulator, using Visual Studio.

Open Visual Studio 2015 in administrator mode (right click VS2015 > Run as administrator), and open the solution file generated by unity (in our case, by navigating into the VSapp folder we created earlier). Once the solution is loaded, right click the “Package.appxmanifest” file and click View Code. Modify the value of the “Name” attribute of the TargetDeviceFamily tag to be “Windows.Holographic”, and make sure the MinVersion and MaxVersionTested attribute values are similar to the second screenshot below:
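In other words, the relevant element of the manifest should end up looking roughly like this (the version values here are the ones commonly shown in documentation of this period; treat them as an assumption and verify them against the screenshot):

    <Dependencies>
      <!-- MinVersion/MaxVersionTested values are assumed; match them to your SDK -->
      <TargetDeviceFamily Name="Windows.Holographic" MinVersion="10.0.10240.0" MaxVersionTested="10.0.10586.0" />
    </Dependencies>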

[Screenshots: Package.appxmanifest in Solution Explorer, and the modified XML]

Next, select Release mode, the x86 configuration, and the HoloLens Emulator from the device list, as shown in the screenshot below:

[Screenshot: solution configuration set to Release, x86, HoloLens Emulator]

Now, if we select Debug > Start without debugging from the menu, the emulator should start up, and our application should be deployed and automatically run in the emulator. Note that this procedure will take a bit of time, especially if the emulator is running for the first time:

[Screenshots: the emulator starting up, the application being deployed, and the running app]

You should be able to see the cube we created rotating in the emulator. You can navigate within the scene using the A, W, S, and D keys, and look around by right-clicking the mouse and dragging inside the emulator-rendered scene. You should be able to walk around the spinning cube, look at it from different vantage points, and move away from it. The surrounding environment is all black, but on the real HoloLens device this will be rendered transparent, and we will see the real room or environment we are physically in, with the spinning cube hovering five meters in front of us.

Useful Links and resources

The following list should help in any further R&D you are motivated to conduct on HoloLens development, just as it was a good source of reference and information for me:

– Development Tools Installation: Link
– Unity HoloLens Technical Preview download: Link
– HoloLens 101: Link
– HoloLens Documentation: Link
– Spatial Mapping: Link

Conclusion…

I hope you enjoyed this tutorial and found it useful and motivating. I will be covering more HoloLens development concepts in depth in future articles, such as spatial mapping, emulator settings (especially the device portal option, which is very useful and helpful), input via gaze, gesture, and sound, and maybe some articles on unity (HTP) modeling and scripting as well. Any errata, comments, issues faced when following this tutorial, and general feedback are most welcome in the comments section below.

The Toshiba MSX and 64Kb of RAM – A Trip Down Memory Lane

When I was young, I grew up surrounded by old computing (and electronics) magazines, issues from the 60’s, 70’s, and 80’s, the majority of them being the popular publication “Creative Computing”. I used to read them (and re-read them) voraciously during a time when people around me were working on PCs running Windows 3.1, which were just emerging into schools, offices, and, to a somewhat rare extent, homes. I remember being quite an antiquarian when it came to these old computing magazines and publications, but I enjoyed reading and learning about how the very first computers looked and operated, about the DEC PDP-11, the Macintosh, the IBM PC with all the Charlie Chaplin adverts, and how the landscape of personal computing was constantly evolving to become what it is today…

image credits: http://www.criticalcommons.org/Members/ccManager/clips/ibm-modern-times-ad/view

 

[Image: Creative Computing magazine cover, April 1985]

image credits: http://www.pcmuseum.ca/details.asp?id=40979&type=Magazine

 


image credits: http://www.computerhistory.org/revolution/personal-computers/17/303/1201

 

[Image: vintage Microsoft advertisement]

image credits: http://www.alticoadvisors.com/Blog/tabid/183/id/441/Friday-Funday-Blast-from-the-Past-Vintage-Microsoft-Ads.aspx

When remembering all those old computer magazines and articles, what I really miss and remain nostalgic about is my dad’s Toshiba MSX home computer, which he bought during the 80’s and which was the first computer I ever learned to program on. I was introduced to programming at a very young age, as I was always curious about this machine my dad used to work on, where the display was a television set connected via RF cable, and programs were stored and loaded on audio cassette tapes. The MSX standard was first announced by Microsoft in 1983, and conceived by the then vice-president of Microsoft Japan, Kazuhiko Nishi. The MSX computer was quite famous as an architecture that major Japanese game corporations wrote games for before the Nintendo era (the first version of Metal Gear was written for the MSX architecture).

Kazuhiko Nishi and Bill Gates in 1978:


image credits: http://f-ckyeahbillgates.tumblr.com/post/73515656034/jara80-kazuhiko-nishi-and-bill-gates-in-1978

The MSX HX-10 model my dad owned was an amazingly simple machine by today’s standards, having no internal storage or hard drive, instead relying on an external audio cassette recorder to load and store programs on cassette tape (there was a cartridge slot, but audio cassette tapes were widely used, as they were a cost-effective medium and plentiful). The HX-10 was based on a Zilog Z80 processor and came with 64KB of RAM (user space was about half that, giving roughly 28KB for the user, with the rest used by the system), which was basically what you had to work with. The machine used a version of the BASIC programming language known as MSX BASIC, which came pre-installed in ROM.

A Toshiba MSX HX-10 model with packing and user manuals (the cartridge slot is on the upper right of the machine):


image credits: http://www.nightfallcrew.com/07/11/2010/toshiba-msx-home-computer-hx-10/?lang=it

 

The rear of the HX-10 (You can see where the television RF cable and audio cassette recorder connect to):


image credits: http://www.amibay.com/showthread.php?48508-Toshiba-MSX-boxed-joystick-lightpen-and-games-(looks-NEW)

 

A cassette recorder of the type used to save and load programs for the HX-10:


image credits: http://www.retrocom.org/index.php/gallery/gallery-toshiba/hx-10-cassette-recorder-open-304#joomimg

 

What you see on the television screen once the MSX fires up:


image credits: http://www.nightfallcrew.com/07/11/2010/toshiba-msx-home-computer-hx-10/?lang=it

 

An early game for the MSX home computer (Blockade Runner):


image credits: http://gamesdbase.com/game/msx/blockade-runner.aspx

My dad used to program a lot as a hobby, and taught me a lot about coding and computer internals using the MSX HX-10 machine we had at home. Nowadays, I look back and remember my dad learning and coding sprite-based games and programs in (Z80) assembly, as graphics processing was quite slow when programmed in plain MSX BASIC. Programming in assembly language is generally considered a time-consuming, complicated, and unnecessary feat in the developer community today, but back then it was pretty much the norm, as it was the only way to write games with fast and smooth graphics on machines like the MSX HX-10. On top of that, there was no internet to search for solutions when stuck on a problem; all reference and troubleshooting was done through books and manuals:


image credits: https://archive.org/details/Z-80_Assembly_Language_Programming_1979_Leventhal

 


image credits: http://www.computinghistory.org.uk/det/24550/Getting%20the%20Best%20fron%20your%20MSX/

 


image credits: http://www.computinghistory.org.uk/det/15259/The%20Msx%20Games%20Book/

 


image credits: http://www.computinghistory.org.uk/det/15261/Useful%20Utilities%20for%20Your%20MSX/

Looking back, it was a very interesting and influential time period in my life, where I was exposed to the technicalities of how computers worked. I got so used to working with the MSX HX-10 that it was a strange feeling to work on a Windows (v3.1 and up) machine for the first time (you would assume it is a simple switch from a terminal-based 80’s machine to a Windows PC with its slick GUI and internal/external drives, but trust me, I remained used to the whirring tape recorder and the television set as a monitor for a long time).

Anyway, computing today has evolved to the point where (at times) fundamental concepts of how computers work, and simple programming fundamentals, are considered too low-level or unnecessary. This is one symptom of the many abstractions, tools, and layers of software that have been built on top of the computing machine over the years, which have made it incredibly easy for everyone to use computers, but perhaps not to understand them well. This is a far different picture from the home computer owners of the 80’s, a majority of whom had to learn how the machine worked and program it on their own in order to use it, and even more contrasting with the 70’s, the days of the Altair 8800, for example, which had to be programmed using only switches.

In my case, it was a rewarding experience to have learned a lot about computers and programming at a very young age, which is a main reason for what I am doing today in my career. In spite of all the new hardware advances, the latest programming languages, software tools/applications, and computing abstractions (all of which I love, and work with every day), I am still drawn to vintage computers, old programming languages, and historical software systems, in large part due to my experience with the Toshiba MSX during my younger days. That part of my life is something I can always look back on, and count myself lucky to have been through, and it continues to inspire and motivate my love of technology, computers, and coding.

Seveneves by Neal Stephenson

In my (humble) opinion, when it comes to intellectually stimulating, hardcore cyberpunk science fiction, there is no better author than Neal Stephenson, who does a great job in each and every one of his novels to date. For anyone who likes vast concepts steeped in technical, scientific, philosophical, and mathematical landscapes, portrayed in even vaster settings within (science) fiction, Neal Stephenson presents just that, taking your brain on an intellectual roller coaster ride each time you read one of his books. The very first book of his I read was Cryptonomicon (a copy which belonged to my uncle), and it blew my mind. I went on to read two more of his novels, Anathem and Reamde, which showcased the author’s ability to create unimaginably different worlds and settings from book to book. For instance, Anathem is speculative fiction in a monastic setting (with a theme revolving around the many-worlds interpretation of quantum mechanics) with deep philosophical implications, whereas Reamde is a fast-paced techno-thriller revolving around an MMORPG, with crypto-currencies, social networking, and hacker culture thrown into the mix.

Coming back to why I wrote this post: Seveneves is a novel by Neal Stephenson published in 2015, which was also recommended by Bill Gates, who says it is the first science fiction book he has read in a decade. It is a work of speculative fiction, and starts off with a major catastrophe, the destruction of the moon, which triggers a series of apocalyptic events. The story spans from the moment this event takes place to thousands of years in the future, which in itself requires the reader to digest a vast sweep of time pertaining to the human will to survive. For me, it is a book with an interesting premise that evokes a lot of thought and insight.

My advice on reading books by Neal Stephenson: pick a book with a theme that interests you, and see if you like his style and presentation. (You might learn a lot about mathematics, cryptography, the Enigma machine, Van Eck phreaking, UNIX, and the history of World War Two just by reading one book, which is what happened when I finished Cryptonomicon.)