A history of concurrent programming and Actor based frameworks – Part 1

In today’s landscape of high-end parallel processing technology, large-scale distributed systems, and the many hardware and software solutions that address software concurrency, it is not easy to maintain a holistic view of the fundamental problems of concurrency. The sheer number of technologies built to solve these problems, together with their shifting trends and adoption rates, often obscures the fundamentals and concepts of the problem itself.

This post is the first in a series that looks at the historical origins and evolution of handling software concurrency, as well as the Actor model – a mathematical model of concurrent computation put forward in the 1970s – which in recent years has driven the popularity of Actor-based frameworks for implementing concurrent, resilient, and distributed applications.

Early Origins

The term ‘concurrency’, and many of the problems associated with it, existed long before computers were invented – in the world of railroad lines, telegraph offices, and many other similar-era developments. Though not yet associated with the computational problems of the last few decades, the early understanding of concurrency predominantly made headway on the problems of the day: parallelizing rail line operations, synchronizing and sending multiple signals over telegraph lines, and similar operations in the many other sectors of the industrial revolution that required processes to be handled in parallel. Picture a late 19th-century factory (perhaps using punched card machines – the 1890 US census used a tabulating machine operating on punched cards) with workers and machinery at different stages, and production lines churning out finished goods, and you can intuit how concurrency would have been a major consideration even in that period.

In any case, what we intuitively understand as concurrency became a considerably more complicated concept with the advent of computers. The first generation of machines were invented, and worked as ordained by the theories of computation laid down by Kurt Gödel, Alan Turing, Alonzo Church, and John von Neumann. These vacuum tube behemoths ran their instructions sequentially and churned out results to humans, who for the most part were relieved to have a means of automating calculations and processes they would otherwise have done manually. The second and third generation machines, with their transistor and integrated circuit innards, created large waves of change in the computing world (by and large making microcomputers inexpensive and accessible to many), but the basic software instruction and processing model remained sequential – feeding a single processor – much as a first generation machine would have been programmed (to be fair, attempts at building supercomputers using parallelization were undertaken during this time; one early example is the ILLIAC IV).

Turing Machines, Pushdown Automata, and Finite State Machines

In order to lay down some foundation, and to better appreciate the observations and content in the next few sections of this post, we divert our attention to a quick overview of abstract models of computation known as Turing machines, pushdown automata, and finite state machines. Abstract machines are the foundations of modern computers. Mathematical models of computation laid the groundwork for computers and programming languages, and continue to probe the possibilities and limitations of what computers are capable of.

A finite state automaton (or finite state machine – FSM) is an abstract, mathematical model of computation. Very simply put, it models a machine that receives input in the form of a sequence of symbols, and can be in only one state at any given time. Changing from one state to another is known as a transition, and occurs based on the input symbol read from the sequence. Globally, an FSM only knows the current state it is in; there is no concept of memory that can hold a list of previous states. Shown below is a rather contrived example of an FSM with three states – A, B, and C.


Finite State Machine with states A, B, and C

State A, which the incoming arrow points to, is known as the starting or initial state. State C is the accepting state, denoted by the double circle. The arrows between the states, as well as the self-referencing loops, are known as transitions. The transitions are labelled with symbols, denoting the input symbol value that triggers the transition. In the above case there are three states and a set of two input symbols {0, 1}, so the FSM is called a three-state, two-symbol FSM.

Given this information, it is very easy to see what happens if an input sequence such as 1011 is fed into the above FSM. We begin in the starting state A, and from here we read the first symbol of the input sequence 1011, which is 1. This causes the FSM to transition to state B. The next symbol we read is 0, which triggers a transition from B back to A. Likewise, the next symbol is 1, so the FSM transitions to state B, and on the final symbol, 1, the state transitions from B to C.

The important point to note here is that arbitrarily long and complicated sequences of symbols (not limited to binary digits) can be read in by the FSM, and when the final symbol of the sequence is read, the FSM is either in state C (the accepting state) or not. If it ends in the accepting state, the sequence is said to be accepted by the FSM, and the set of all sequences an FSM accepts is known as a ‘regular language’. Conversely, a language is said to be regular if there is some FSM that accepts it. This lets us state that a sequence such as 1001001001001101 belongs to the regular language accepted by the above FSM, but 100100100100110 does not.

Definition: A regular language is the set of symbol sequences accepted by some FSM, i.e. the sequences on which that FSM terminates in an accepting state.
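To make this concrete, here is a minimal sketch of the three-state FSM above in code. The diagram itself is not reproduced in the text, so the transitions not described explicitly – A looping to itself on 0, and C’s outgoing edges – are assumptions chosen to stay consistent with the example sequences:

```csharp
using System;
using System.Collections.Generic;

class FsmDemo
{
    static void Main()
    {
        // Transition table: (current state, input symbol) -> next state.
        // A-1->B, B-0->A, B-1->C come from the walkthrough above; the rest
        // (A-0->A, C-0->B, C-1->C) are assumed to match the example strings.
        var transitions = new Dictionary<(char, char), char>
        {
            [('A', '0')] = 'A', [('A', '1')] = 'B',
            [('B', '0')] = 'A', [('B', '1')] = 'C',
            [('C', '0')] = 'B', [('C', '1')] = 'C',
        };

        bool Accepts(string input)
        {
            char state = 'A';                        // starting state
            foreach (char symbol in input)
                state = transitions[(state, symbol)];
            return state == 'C';                     // accepting state
        }

        Console.WriteLine(Accepts("1011"));             // True
        Console.WriteLine(Accepts("1001001001001101")); // True
        Console.WriteLine(Accepts("100100100100110"));  // False
    }
}
```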

FSMs can be deterministic or non-deterministic. The FSM we just looked at is an example of a deterministic FSM, where each state has one and only one transition for each input. In a non-deterministic FSM, an input could change the current state by following one transition, more than one transition, or no transition at all. This brings the notion of non-determinism into the model of FSMs. Given below is an example of a non-deterministic FSM (NFSM).


Non-deterministic Finite State Machine

If you consider the above finite state machine, you will notice that when in state A, if the input symbol is 0, there are two state transitions, one to A itself and another to B – i.e. there are two possible next states. The above NFSM accepts sequences that end with 010. And if the machine is in state C and the input symbol is 1, no state transition occurs.
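To show how this non-determinism can be simulated, the sketch below tracks the set of all states the NFSM could be in after each symbol. The figure is not reproduced in the text, so the transition table is my reconstruction of an ‘ends with 010’ machine from the description above, with an assumed fourth state D as the accepting state:

```csharp
using System;
using System.Collections.Generic;

class NfsmDemo
{
    static void Main()
    {
        // (state, symbol) -> zero, one, or many possible next states.
        var transitions = new Dictionary<(char, char), char[]>
        {
            [('A', '0')] = new[] { 'A', 'B' }, // two possible next states
            [('A', '1')] = new[] { 'A' },
            [('B', '1')] = new[] { 'C' },
            [('C', '0')] = new[] { 'D' },      // no entry for ('C', '1')
        };

        bool Accepts(string input)
        {
            var current = new HashSet<char> { 'A' };
            foreach (char symbol in input)
            {
                var next = new HashSet<char>();
                foreach (char state in current)
                    if (transitions.TryGetValue((state, symbol), out var targets))
                        next.UnionWith(targets);
                current = next; // every state the machine might be in now
            }
            return current.Contains('D');
        }

        Console.WriteLine(Accepts("1010")); // True  (ends with 010)
        Console.WriteLine(Accepts("0101")); // False
    }
}
```

Tracking a set of states instead of a single state is, as we will see shortly, also the key idea behind converting an NFSM into an equivalent deterministic FSM.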

There is another type of NFSM which uses what are known as ‘epsilon transitions’. Put very simply, an epsilon transition means that a state change can take place without any input symbol being consumed. This will be clearer with the example of an NFSM that uses epsilon transitions, shown below.


NFSM with Epsilon transitions

In the above NFSM, the transitions labeled with the Greek letter epsilon (ε) are the epsilon transitions. State A is the starting state as well as an accepting state; C and G are accepting states too. Let’s consider what happens when the input (symbol sequence) consists only of the symbol 0. First, we are in the starting state A. But as there are two epsilon transitions from A, we move to the states they point to without reading any input symbol, so we are automatically in states B and H as well. Now we read the input symbol, which is 0. Only state H transitions on this symbol (to G); no state change takes place from B. As the input sequence is now consumed and the NFSM terminates in an accepting state (G), we can conclude that 0 is a member of the regular language accepted by this NFSM. So are 1, 01, 10, 010, 101, 1010, etc. (which you can check against the above NFSM).
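The mechanics of moving without consuming input can be captured by a small epsilon-closure routine, sketched below (only the epsilon edges from A to B and H, taken from the walkthrough above, are assumed):

```csharp
using System.Collections.Generic;

static class EpsilonDemo
{
    // All states reachable from 'states' through epsilon transitions alone,
    // i.e. without consuming a single input symbol.
    public static HashSet<char> EpsilonClosure(
        Dictionary<char, char[]> epsilonEdges, IEnumerable<char> states)
    {
        var closure = new HashSet<char>(states);
        var pending = new Stack<char>(closure);
        while (pending.Count > 0)
        {
            char state = pending.Pop();
            if (!epsilonEdges.TryGetValue(state, out var targets)) continue;
            foreach (char target in targets)
                if (closure.Add(target))  // newly discovered state:
                    pending.Push(target); // follow its epsilon edges too
        }
        return closure;
    }
}
```

With the edges A → B and A → H, the closure of {A} is {A, B, H} – exactly why we were ‘automatically’ in states B and H before reading the first symbol.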

NFSMs are pretty handy when modelling reactive systems, and modelling a system using an NFSM is easier than using a deterministic FSM. It would seem that NFSMs are more powerful than DFSMs, given the richer rules of parallel state transitions and epsilon transitions, but it can be proven that for any regular language recognized by an NFSM, there exists an equivalent DFSM that accepts that language (and vice versa). This allows us to design or model systems using NFSMs, and later convert them into equivalent DFSMs.

For abstract machines (such as the FSMs we saw above), the ‘power’ of a machine is another way of saying how many languages it can recognize. There are limitations of FSMs that are addressed by another type of abstract machine known as the pushdown automaton (PDA). This can be thought of as an FSM plus a stack that stores information in LIFO (Last In First Out) fashion. The PDA can push symbols onto the stack as it reads input, and (in one common formulation) reaches acceptance only if the stack is empty once the input symbol sequence is processed. Notice that this is a different acceptance criterion from the FSMs we saw earlier. The image below shows a snapshot of a PDA in operation.


Pushdown Automata (Source)

If the power of abstract machines is determined by the languages they can recognize, then the mathematical model of computation known as the Turing machine is more powerful than both FSMs and PDAs (this is another way of saying that there are sequences of input symbols that cannot be processed by FSMs or PDAs, but which are accepted by Turing machines).


Mechanical model of a Turing Machine (via giphy.com)

Alan Turing published an article in 1936 titled ‘On Computable Numbers, with an Application to the Entscheidungsproblem‘, in which he first described Turing machines. A Turing machine can be thought of as an FSM with an infinite supply of tape that it can read from and write to. At each step, the machine reads the symbol on the tape under its head, and based on that symbol and its current state, writes a symbol to the tape, moves the head left or right by one cell (a cell being one read/write unit of the tape that can hold a symbol), and transitions to a new state. This seems a simple and basic mode of operation, but Turing machines are powerful because, given any algorithm, we can construct a Turing machine that simulates it. More interestingly, a Turing machine may or may not eventually halt, and deciding which is the case gives rise to the halting problem.


Turing Machine (Source)

You can find an online Turing machine simulator at this link if you would like to experiment with Turing machines (I would suggest going through the tutorials in the simulator, and then trying the ‘even number of zeros‘ example, which is simple yet explains the fundamentals). Also, this video contains one of the best and shortest explanations of Turing machines I have seen yet.
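To give a feel for Turing machines in code, here is a minimal sketch of a simulator, with a rule table for the ‘even number of zeros’ example mentioned above (the rule encoding and state names are my own assumptions, not taken from the simulator):

```csharp
using System;
using System.Collections.Generic;

class TuringDemo
{
    // One rule: the symbol to write, head movement, and the next state.
    record Rule(char Write, int Move, string Next); // Move: -1 left, +1 right

    static void Main()
    {
        // Accepts inputs with an even number of zeros ('_' is the blank symbol).
        var rules = new Dictionary<(string, char), Rule>
        {
            [("even", '0')] = new Rule('0', +1, "odd"),
            [("even", '1')] = new Rule('1', +1, "even"),
            [("even", '_')] = new Rule('_', +1, "accept"),
            [("odd",  '0')] = new Rule('0', +1, "even"),
            [("odd",  '1')] = new Rule('1', +1, "odd"),
            // No rule for ("odd", '_'): the machine halts and rejects.
        };

        bool Accepts(string input)
        {
            var tape = new Dictionary<int, char>(); // a sparse "infinite" tape
            for (int i = 0; i < input.Length; i++) tape[i] = input[i];
            int head = 0;
            string state = "even";
            while (state != "accept")
            {
                char read = tape.TryGetValue(head, out var c) ? c : '_';
                if (!rules.TryGetValue((state, read), out var rule))
                    return false;        // no applicable rule: halt and reject
                tape[head] = rule.Write; // write, move, and change state
                head += rule.Move;
                state = rule.Next;
            }
            return true;
        }

        Console.WriteLine(Accepts("1001")); // True  (two zeros)
        Console.WriteLine(Accepts("10"));   // False (one zero)
    }
}
```

Note that this particular machine never writes anything new to the tape – it behaves like an FSM – but the write and move operations are exactly what give a Turing machine its extra power.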

In modelling computation with abstract machines, Turing machines are the most powerful abstraction we have today, and all our modern computers, program code, and algorithms are based on them. The reason they are powerful is that anything a Turing machine can simulate can, in principle, be constructed physically in the world today, while something that has no Turing-equivalent system cannot be constructed in reality (i.e. we have never come up with a way of doing computation, or any mathematical model, that can do more than what a Turing machine can do – this is why, for example, we have no system today that can solve the halting problem). In this sense, Turing machines are at the forefront of what is computationally feasible. Maybe the next stage in this evolution is quantum computing, but that would be a post for another day.


Stay tuned for post 2 in this blog series, where we will explore the early mathematical models of concurrency, Dijkstra’s 1965 paper which addressed concurrency control for the very first time and gave rise to the dining philosophers metaphor, and how Carl Hewitt’s claim – that logical (mathematical) deduction could not carry out concurrent computation due to indeterminacy – laid the foundations for the Actor model.


The Design Thinking Approach

Design Thinking

“Design thinking is a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.” – Tim Brown | President and CEO, IDEO

This evening, I attended a very inspiring talk titled ‘Trends in Innovation and Knowledge’ delivered by Dr. Raomal Perera, on innovation, creativity, and particularly the design thinking process. It was a short and succinct talk touching upon how the economic center of gravity has traversed the globe from past to present, the mechanics of an IDEO team which innovated on and improved the design of a shopping cart, and the design thinking process. This post is mainly about the key takeaway from the session (design thinking), interwoven with a little of my own insights, learnings, and musings.

Design thinking, both the philosophy and the process, has a rich history, as I found out from this article on Medium. It is a human-centric problem solving and innovation approach, which is creative, effective, and applicable to many varied fields and industries. During the talk, Dr. Raomal conveyed the idea that ‘design’ is not a product, artifact, aesthetic, or document/diagram, but a process. This brought to my mind (coming from a software engineering background) a quote from a recent book I read by James Coplien, titled ‘Lean Architecture for Agile Software Development’. That particular quote goes as follows (emphasis added by me):

“You might ask: Where does (software) architecture start and end? To answer that, we need to answer: What is architecture? Architecture isn’t a sub-process, but a product of a process – a process called design. Design is the act of solving a problem” (Lean Architecture for Agile Software Development, James Coplien – 2010)

To me, many of the ideas in the design thinking process resonate with the ideology and outlook that James Coplien has towards building really great software systems. His many talks on capturing the human mental model in code, and his formulation of the DCI (Data, Context, and Interaction) architecture, are testament to that opinion. The design thinking process and approach, used correctly, would be a game changer in any industry, domain, field, and academia. More so in the software services domain, whose core objective could basically be summed up as eradicating end user pain points and enhancing their experience through value addition.

Of course, as Dr. Raomal mentioned at the end of his talk, in order for any creativity, innovation, and seeds of design thinking to take root in a company or enterprise, there should be a high level of support and fostering in terms of culture, habitability, and resources (and in this context resources do not always equate to R&D budgets). But done right, the design thinking approach will produce amazing products, services, and systems, that go beyond end users’ expectations, and solve their problems in some of the most innovative and creative ways possible.

The video at this link contains a short but informative high level overview of what design thinking is all about. For a short explanation of the design thinking process and its components, the video here does a great job in my opinion. Finally, for those who want to delve deeper into the design thinking process and related activities, take a look at this Stanford webinar.

The Microsoft Ecosystem for Artificial Intelligence and Machine Learning


The dawn of artificial intelligence… hopefully the human race will map a date and place to it somewhere down the line in the future, if we survive the technological singularity that is… But then again, the technological singularity is (was?) a hypothesis, and a hypothesis is just that: a hypothesis.

Techno satire aside, the dawn of artificial intelligence is still yet to see the light of day (no pun intended). We are still progressing in the research of AI, and the difficulty is in the word ‘Intelligence’… it is a rather large bucket into which many definitions are thrown, and each research milestone that meets a particular criterion or definition will announce that AI is progressing the way it should. The positive aspect of this fragmentation and branching out is that it has given rise to so many of the advances we see in the world around us, from self driving cars, to personal assistants, to adaptive machines learning from the content we humans have produced.


The history of traditional AI research had its beginnings in the summer of 1956, at a Dartmouth workshop titled “Dartmouth Summer Research Project on Artificial Intelligence”. Most of the attendees would go on to become the leaders and experts in the field of AI. The workshop was not a very formal one, leaning more towards discussions and talks between individuals and groups, in the main math classroom at the top floor of the college. Some of the outcomes of the workshop were symbolic methods, early ideas for expert systems, and ideas on deductive vs inductive systems. Of course, the advent of digital computers at around the same time (actually, earlier) was a great catalyst and platform for implementing many of the ideas of AI research in practice. But it was here that many found out that creating the general AI everyone envisaged was not so easy.

Today, there is a major shift in AI research and trends. The areas of machine learning and deep learning have sprung up, and coupled with the power offered by cloud computing, have made major strides into areas previously unimagined. And unlike the past decades where AI was driven by the scientific community, the current paradigm shift is steered by enterprises and consumers, who are digitizing their everyday lives and experiences. Satya Nadella, the current CEO of Microsoft, made the following statement in 2016, at the World Economic Forum held in Switzerland:

This new era of AI is driven by the combination of almost limitless computing power in the cloud, the digitization of our world, and breakthroughs in how computers can use this information to learn and reason much like people do. What some are calling the Fourth Industrial Revolution is being accelerated by these advancements in AI, and this means that all companies are becoming digital companies — or will soon be.

This is an important statement that gives meaning and validity to the advances Microsoft has made in its own cloud platform and stack, Azure. The ‘new era’ of AI hinges on the fact that cloud computing has empowered the enterprise, science, technology, government, and society, to undergo the ‘Fourth Industrial Revolution’. In this regard, Microsoft has made many advances, and set the stage for some amazing ways of learning, utilizing, and working with AI and machine learning.


At times, the difference between AI, machine learning, and deep learning is a bit fuzzy. Historically speaking, AI gave rise to machine learning, and machine learning in turn gave rise to deep learning. The rise of machine learning was a shift in thinking towards how machines could learn from existing data, and adapt or change their behavior. This differs from traditional (enterprise) AI built on hard-coded heuristics – essentially knowledge based systems with well defined inference rules.

Deep learning is the new kid on the block, a sub-field of machine learning that works with algorithms inspired by how the human brain functions. It revived the study of artificial neural networks (which had lain dormant for a decade or two) because we now have the computing power (cloud computing again) to work with millions of artificial neurons meshed into ‘deep networks’, and mimic how the human brain learns better than any technology has ever done before.

In the context of enterprises and consumer technology, traditional AI systems made the human being a decision maker in the value chain. Machine learning systems can learn from existing data and predict results, events, patterns, and objectives, but humans still take on the role of applying prescriptive actions based on these results. For example, in credit card fraud detection using machine learning, suspicious activity can be inferred from transactional data, but human actors need to take the next steps in terms of what to do with that information. A major goal of machine learning is for machines to learn and apply prescriptive actions as well, removing the human altogether from the equation (see image below).

Coming back to the cloud technology stack and ecosystem provided by Microsoft for AI, machine learning, and deep networks, the latest cutting edge offering is the Cortana Intelligence Suite (CIS). This is a collection of cloud technologies and tools that enables anyone to build end to end advanced analytics systems and pipelines. The short (but comprehensive) channel9 video at this link, by Buck Woody from the machine learning and data science team at Microsoft, gives a great overview of the CIS stack and its components. The image below also shows the individual components of CIS, and the general direction of data flow:


Source: biz-excellence.com/2015/10/12/cortana-intelligence-suite/

The components of the CIS stack are as follows (given below in no specific order, but the list roughly starts from where data is consumed, and finishes where informative decisions are made or automated):

  • Azure Data Catalog – This is a metadata repository provided for all kinds of data sources, which is available for discovery by users. Whether on-premises or off, any data that is discoverable will be tagged and labeled here, providing users with easy access to locating data, and removing any duplicate effort related to data collection.
  • Azure Data Factory – The data factory is used to orchestrate data movement. Data from disparate sources and systems need to be moved around in the pipeline or advanced analytics system being implemented, and the data factory helps move around all this data within the system wherever it is required.
  • Azure Event Hub – This is a highly performant telemetry ingestion service, which can be used to consume and store event data from millions of events generated by IoT devices, sensors, and other hardware or software components that raise events of very large volume, variety, and velocity (see the sketch after this list).
  • Azure Data Lake – The data lake is used to store very large amounts of structured, semi-structured, and/or unstructured data. It is composed of two parts: the data store, and a query mechanism whereby many languages can be used to query the ‘lake’ of data available. Data lakes are usually used to store very large volumes of data in their raw native format, until they are needed by a system or a process.
  • Azure SQL DB, SQL Data Warehouse, DocumentDB – These are all used to store and relate the data in a system, with Azure SQL DB being the usual MSSQL offering on Azure. Azure SQL DW is a highly scalable cloud data warehouse solution. Azure DocumentDB is the NoSQL database hosted on Azure, similar to other document based databases, but offering higher performance and scalability. A great case study of DocumentDB is when Next Games used it as the backend for their mobile game ‘The Walking Dead – No Man’s Land’.
  • Azure Machine Learning, Microsoft R Server – Azure ML and R Server are the environments where the learning takes place. Predictive analytics and models are created using the data available, catering to many different consumers. Keep an eye out, as I will be posting more articles on machine learning using Azure ML studio.
  • Azure HDInsight – This is Microsoft’s Hadoop offering, compliant with Apache Hadoop, with loads of functionality and features added on top. If you are thinking Big Data, this is what you need.
  • Azure Stream Analytics – Sometimes, the data that we need to analyze is produced in real time, and at a very high velocity. It may be the case that this kind of data needs to be analyzed in real time (or near real time), as and when it is created, and this is where stream analytics comes in. In scenarios where IoT devices, sensors, vehicles, etc. produce high velocity data which needs to be processed and analyzed in real time, stream analytics can provide endpoints for consuming and analyzing this data.
  • PowerBI – This is Microsoft’s visualization tool: data comes in, and rich visualizations come out. It is available as a service consumable by a variety of clients, and also offered as a desktop client and an app for mobile devices.
  • Cortana, Cognitive Services, Bot Framework – These are programming environments exposing services which can be used to build interfaces where human computer interaction takes place. Here we would like to ‘talk’ to the system and get insight, answers, predictions, and suggestions from all the data processing we did in the pipeline. Cortana is the well known digital assistant on Windows devices, but it is also an API which you can program against. Cognitive Services are a set of APIs ranging from text, image, and speech recognition, to translation, content recommendation, and much more. Finally, the Bot Framework is a great tool for building interactive and intelligent bots that can help and converse much like a human being interacting with you.
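As a tiny taste of the programming model behind one of these components, the sketch below sends a single telemetry event to an Event Hub using the .NET SDK of that era. Treat it as a sketch: the connection string, hub name, and payload are placeholders, and the exact package and API surface may differ in newer SDK versions:

```csharp
using System;
using System.Text;
using Microsoft.ServiceBus.Messaging; // from the WindowsAzure.ServiceBus package

class TelemetrySender
{
    static void Main()
    {
        // Placeholder values: use your own connection string and hub name.
        var connectionString = "<your-event-hubs-connection-string>";
        var client = EventHubClient.CreateFromConnectionString(connectionString, "sensors");

        // A single hypothetical sensor reading, serialized as JSON by hand
        // to keep the sketch dependency free.
        var payload = "{\"deviceId\":\"sensor-42\",\"temperature\":27.3}";
        client.Send(new EventData(Encoding.UTF8.GetBytes(payload)));

        Console.WriteLine("Event sent.");
    }
}
```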

So, in a nutshell, that’s the spectrum of technology and tools you would be working with in the CIS stack. Of course, you are not required to use each and everything listed above. Rather, you would select the tools that work best for your requirement, and build your customized implementation accordingly.

Imagine a scenario where the executives in your enterprise task you with building a complicated advanced analytics system, in order to better leverage business intelligence and predictive analytics for improving all facets and units of the business. Now imagine doing it using in-house software and hardware, purchasing tools and licenses that will cost thousands of dollars, and programming from scratch to utilize on-premises nodes and clusters for powerful computing and number crunching.

If you look at the cost and effort associated, and compare it with the fact that I can log in right now to Azure ML studio for free (using my msn id), and build a predictive analytics model and host it as a web service within a few minutes to an hour or two, you can see how powerful and cost effective the CIS really is. Of course a real world enterprise requirement would take much more time than that (my example was overly simple), but it would still be far more cost effective and simple to use CIS and focus on the (business) matter at hand, rather than dealing with the mechanics, internals, algorithms, data pipelines, storage, VMs, and all the bugs of a custom solution built on-premises from scratch.

Keep an eye out, as I will be posting more articles in the upcoming weeks, dealing with building end to end analytics solutions using the Cortana Intelligence Suite, machine learning models with Azure ML studio, and concepts/algorithms in machine learning and deep nets.

Engineer or Programmer? The (non existent) Existential Dilemma…


This article describes my opinion of the term and title ‘software engineer’ and the implications behind it, and how it ties (or should tie in) to the viewpoints of the engineering discipline at large. (Blog header illustration created using Canva)

I very rarely pay attention to philosophical debates relating to software development, but last week I was reading an article I came across at ‘The Atlantic’, which prompted a train of thought, including this blog post. I suggest you read it before continuing, as it has a very interesting premise on why programmers should not call themselves (software) engineers. I neither agree nor disagree with the author or the article, but it had some very interesting ideas that motivated me to document my thoughts and opinions in this post…

First, a bit about the term ‘software engineering’ from my own perspective and understanding. Throughout my career, I have worked with people who were (are) referred to as software engineers, themselves being graduates of computer science, information systems, or software engineering, the title following the name of the undergraduate degree that each pursued in his or her academic journey. I myself hold a degree in ‘software engineering’ issued by a well known university situated around Regent street, north-west of central London in the United Kingdom. Therefore, the term or title ‘software engineer’ was something I was (and am) quite comfortable with, whether in how I introduce myself, or how I refer to my colleagues.

On the origins of the term ‘software engineering’, the article in question quotes a fact that is commonly taught in the industry and field of software development today: that the term ‘software engineering’ was first deliberately used at the 1968 Garmisch-Partenkirchen conference organized by NATO. It is said that the term was coined provocatively, in a bid to challenge software development at the time to align with the rigid processes and methods followed by the established branches of engineering. But it was interesting for me to come across an article by Bertrand Meyer, which provides some evidence that the term was used at least two years prior to the NATO conference, and in a positive light, indicating (at the time) that software development should be considered an engineering discipline in its own right.

The established engineering disciplines are deemed ‘established’ based on the rigid processes, regulatory bodies, certifications, licenses, continuous learning, and ethical codes they follow. I am quite firm in my understanding that this is a good thing. But some of these aspects came about mainly due to the problems and engineering disasters that were prevalent in the 19th and 20th centuries, which made clear the need to bring in standards, ethics, regulations, certifications and the rest. There is always a scenario which prompts the need for better processes, regulations, and policies. This was widely characterized in the software development world in the last two decades, where a myriad of development philosophies and approaches were invented. Even Robert C. Martin discusses the situation, and tries to convey the message that if software developers (engineers?) don’t start getting their act together, we will be imprisoned by regulations and policies dictating how we should develop software.

If the practice and business of developing software is expected to evolve into an (established) engineering discipline, its progress will always be compared to the other mature engineering disciplines. This outlook and comparison was not favoured by some, which resulted in a large number of software development circles stating that software is more art/craft/science than engineering, and that there are better ways to build software than using rigid engineering processes. This viewpoint is widely respected and quite popular in the software development world today. On this side of the wall, software development should be lean, agile, and dynamic, without heavyweight engineering processes and regulations retarding these ideas – which is practical for all intents and purposes. Software is an artifact and material unlike anything that traditional engineers are used to working with, and early lessons taught us that building software to specifications with rigid processes containing blueprints and schematics was not the best way to go about things. But the problem was (and still is) that the computing infrastructure (used by the public, consumers, clients, business users etc.) which is the end product of software development, still requires the same rigidity, reliability, safety, and conformance that a bridge or building built by a civil engineer should have. This is expected by a society which functions daily on the trust and understanding placed in the infrastructure around them, built by engineers. And therein lies the disdain towards the software development world (mostly by the engineering community, I suppose), where systems go wrong, bugs seem to evolve and reproduce, and software development is an activity that seems random and haphazard. The engineering world has its fair share of problems and disasters, just as there are software project failures and disasters. But a crucial difference is that the engineering disciplines will emphasize and prioritize the regulations, standards, public health and safety, and ethics, even if failing to deliver.

Just as engineering has foundations in the natural sciences, software ‘engineering’ too has its origins in philosophy, mathematical logic, and various other branches of science that have no direct correlation to what we do today as developers of software. The early proponents and pioneers who set the stage for software development to become a widespread phenomenon were computer scientists, mathematicians, particle physicists, and information theorists. Not engineers per se. I was searching online and came across the Professional Engineers Ontario website, where the definition of professional engineering is formally stated. I specifically chose Canada as it is one of the countries that are extremely strict about the title ‘engineer’ and who is licensed to carry it. I would like to directly quote the definition of the practice of engineering, taken from their website:

Professional engineering is:

  1. any act of planning, designing, composing, evaluating, advising, reporting, directing or supervising (or the managing of any such act);

  2. that requires the application of engineering principles; and

  3. concerns the safeguarding of life, health, property, economic interests, the public welfare or the environment, or the managing of any such act.

Does that ring a bell? At least for me it did. This seems to sum up what I have been doing for the past few years in my tenure as a software engineer. In a sense, what’s missing are the regulatory bodies, the policies, licenses, code of ethics, and the rest of the bells and whistles.

Let me summarize (thank you, if you are still reading) and propose an approach towards building better software, gaining the respect of traditional ‘mature’ branches of engineering, and proving ourselves worthy of the title ‘software engineer’.

  • First, I lied about software development not having a code of ethics: Software engineering does have a code of ethics known as the IEEE CS/ACM Code of Ethics and Professional Practice, but no regulatory body to enforce it. It would be a great start to read this, and attempt to see how applicable it is in our everyday work. We may not have regulatory bodies, policies or practice licences etc. but that is not an excuse for not committing to the best standards and practices, and/or not being ethical. In the popular book Content Inc, one of the main (controversial) ideas conveyed by the author is that you should build an audience before you figure out what it is you want to sell them. Ours is a somewhat similar situation, where we need to build our audience (society expecting traditional engineering recipes) and figure out what to sell them (trust and reliance in software development using tailored/non-traditional engineering processes, backed by ethics and a sense of responsibility).
  • Second, the use of sound engineering principles: The design and architecture of the software intensive systems we build, the practices and guidelines followed when coding, the project management and costing aspects, and the control and quality checks, all rely on the application of sound engineering principles in the construction of software. True, there is no universal standard or guideline dictating how we go about doing this (in fact there are a great variety of approaches and methodologies), but whatever we trade off or cut down on, it would be advisable to always reflect and check whether what we are doing can still be comfortably called ‘applying sound engineering principles’, without any hint of doubt in our mind.
  • Third, continuous learning and certifications: The software development world is plagued with people who are too busy learning and have no time to work, or are too busy working and have no time to learn. Jokes aside, continuous learning is something we developers are familiar with, due to the evolving trends and technology waves that tow us in their wake. There is no one person or body to standardize it and tell us the how, why, what, and when to learn, but we have ample certification programs to showcase to employers how good we are. Maybe we should have that same pride and attitude, in showing that we are capable of building great software, and that our certifications and achievements are a testament to our responsibility and obligations towards our stakeholders as well.

That was not a short summary, and I apologize for that. Looking back, this post is not a negative response to the original article at ‘The Atlantic’; rather, I was interested in critically analyzing and identifying the weaknesses in software engineering, which is constantly being targeted by traditional engineering disciplines. That being said, in closing, a big salute to all the software engineers, developers, and programmers, who daily contribute to advancing the field of software engineering towards being a mature and well respected discipline and industry.

A Short List of My Favourite Google Chrome Extensions and Apps

The web browser started out as a simple program that allowed access to HTML content. Today, it has become a portal for media content, real time communication, and an extensible hub of third-party developed functionality. The history of web browsers is very interesting, starting from the WorldWideWeb browser created by Tim Berners-Lee, which was initially used at CERN. This blog post details a few of my favorite chrome extensions and apps, which I use more or less on a daily basis.

Momentum

Momentum replaces the content in a new tab page in chrome with a personalized dashboard, and some very stunning background visuals. It contains a personalized message, weather information, favorite links, and a to do list.

Chrome Web Store Launcher


The chrome web store launcher enables you to manage and launch your installed chrome apps, as well as search for new apps. It is also possible to define five favorite applications that come up on top for easy access.

Dark Reader


The dark reader extension is very interesting and useful, as it does not simply invert the colors of the web pages displayed in the browser for easy viewing. Instead, it provides a set of filter and font options, which can be tweaked in order to get an optimal viewing experience that is easy on the eyes.

Turn Off The Lights


The turn off the lights extension does exactly what it says: dims the whole web page except the video you are watching. This extension is a simple but very effective tool for giving a somewhat cinematic experience to the videos that you watch, and also helps to focus on the video content.

PDF Viewer


The PDF Viewer extension provides a really cool and lightweight PDF reader right within chrome. I’ve found that from the time I installed the extension, I have done almost all of my reading in chrome. The main winning points for me were the rendering/scrolling speed, and the ability to switch between reading and looking up web references from within the browser itself.

Gliffy Diagrams


The Gliffy Diagrams app is a great tool for all manner of diagramming requirements, from flowcharts to UML views. It has a great collection of diagram types, and some nice themes that can change the look and feel and the colors of the diagrams created.

Postman

Postman is a really great Web/REST API client, where you can test your API with HTTP requests. That being said, it is quite a feature-heavy app, in which you can see request-response history, create custom scripts, and document your API. If your work touches on building HTTP APIs, this tool is a must.

Pixabay

Pixabay is a simple app which launches the pixabay site, where all the images and videos are available for free. They are released from copyright under Creative Commons, and can be used royalty free. A great source of free media content for designers and content creators.

WordPress

And finally, the WordPress app, which I use to create all my blog posts (including the current one). This app is a very intuitive program, with a clean and neat interface which is a pleasure to use. Compared with logging into the WordPress admin portal to create blog articles, the WordPress app is a lifesaver.

The above list of chrome extensions and applications are the ones I use the most, and they maximize browser capability for increased productivity in my work and research. What tools, extensions, and apps do you use for work, play, or study? It would be great to hear about your recommendations and favorite extensions in the comments section below…

HoloLens Development using Visual Studio 2015 and Unity HoloLens Technical Preview


[image credits]

Last week I did a session at Dev Day 2016 on the topic “Developing for the HoloLens”, and this post is a tutorial complementary to that session, demonstrating the steps of how a simple holographic application can be built and deployed in the HoloLens emulator, using the tools provided by Microsoft and Unity…


  1. What Are Mixed Reality Applications?
  2. Software and Hardware Requirements for HoloLens Development
  3. Developing 3D Holograms using Unity HoloLens Technical Preview
  4. Building and Generating the VS2015 solution from Unity
  5. Deploying the application in the HoloLens Emulator
  6. Useful Links and resources

What Are Mixed Reality Applications?

In general, most people have quite a good idea of what Virtual Reality (VR) and Augmented Reality (AR) are all about, given the AR/VR devices that have proliferated in the market in recent times, plus the large number of articles on AR and VR available on technical news sites, blogs, and vendor sites out on the internet. With the introduction of the HoloLens, Microsoft created one of the most groundbreaking and disruptive innovations in display technology, which introduces a new term, “Mixed Reality”, into the mix (no pun intended). Virtual, augmented, and mixed reality are easy ideas to grasp if you see what they try to offer in terms of how we see the world:

  • Virtual Reality offers the user an immersive experience, where “presence” is the key factor. The external (real) world is shut off, and the virtual world takes over the user and his senses. Interestingly, these types of applications and virtual spaces have been researched for quite some time (including being heavily portrayed in early sci-fi movies and books), as evidenced by the early 90’s game ‘Dactyl Nightmare’.
  • Augmented Reality is all about overlaying the real world with graphics and animations which serve to be informative or otherwise entertaining. They could range from scientific applications used out in the field, to mapping technology used by everyday people for driving or cycling. Examples include map overlays, directional indicators and animations over roads and buildings when viewed through phones, and games like Pokemon Go.
  • Mixed Reality is a concept brought into the mainstream recently, in large part by the introduction of the Microsoft HoloLens. This is a new way of looking at the world because, unlike VR, you can still see the real world around you. And unlike AR, the user is given the feeling that any digital artifact (hologram) he is seeing is part of the real world. This is mainly because the holograms seen through the HoloLens become part of the real world around you – responding to the solid objects and surfaces in your room, to touch and sound, and behaving as if they were part of the real world.


[image credits]

With regards to the HoloLens device itself, I will not go into any detail on the specifications and what it is capable of… You can find a lot of its capabilities in the video at this link. Similar to the session that I did at Dev Day 2016 last week, we will take a developer’s perspective and see what we need to know about setting up a development environment for programming the HoloLens. After that, we will dive into building and deploying a simple HoloLens application, using (free) tools provided by Microsoft and Unity.

Software And Hardware Requirements For HoloLens Development

The HoloLens development tools should be installed exactly as instructed at this link.

The following tools provided by Microsoft and Unity need to be installed on your development system. Unity is a very popular cross platform game engine used to create games and interactive content for many platforms and devices, and you can find many resources online for learning it. The Unity HoloLens Technical Preview is a build of Unity implemented specifically to help create content for the HoloLens.

When installing Visual Studio 2015 Update 3 (or if you have it already installed in your system), one important thing to note is that the two nodes below are selected in the feature selection during installation:


If the “Tools” and “Windows 10 SDK” are not selected under the Universal Windows App Development Tools node, make sure you select them. In case you have Visual Studio 2015 Update 3 already installed, you can modify the install from add/remove programs and add the above features to the installation.

One particular issue I had was that when trying to download the Unity HTP from its site, I ran into some errors:


When I clicked the above shown link, sometimes it would download the setup file, and sometimes it would give an error saying the content was not available. In cases where the setup file was downloaded, when running the setup, I would get an error saying “no internet connection” midway through the install. These issues seem to have been resolved when I checked again at the time of writing this article. But just in case anyone else is facing similar problems, just click the “Archive” tab, and download a previous beta (which is what I did), which should work without any problems:


Once the Unity HTP setup file is downloaded and run, another (very) important thing is to make sure that the “Windows Store .NET Scripting Backend” is selected in the installation feature selection, as shown in the image below. If this is not selected, the installation of Unity HTP will not have the proper build configurations to generate the Visual Studio project required for deployment to the HoloLens.


Moving on to the hardware and platform requirements, make sure the development system you will be working on meets the following (minimum) criteria:

  • 8GB of RAM (or more)
  • GPU
    • Should support DirectX 11.0 or later
    • Should support WDDM 1.2 driver or later
  • 64-bit CPU
    • CPU with 4 cores, or multiple CPUs with a total of 4 cores
  • 64-bit Windows 10 OS
    • (Editions: Pro, Enterprise, or Education)
    • Home Edition NOT supported (due to next point)
  • Enable “Hyper-V” in Windows (Not available in Windows Home Edition) from Control Panel > Programs and Features > Turn Windows features on or off
  • BIOS Settings
    • Hardware Assisted Virtualization should be enabled
    • Second Level Address Translation (SLAT) should be enabled
    • Hardware based Data Execution Prevention (DEP) should be enabled

Basically, any decent machine today should meet the above criteria and requirements. The exception would be if you were running Windows Home edition, in which case you might need to consider investing in upgrading the operating system.

Developing 3D Holograms using Unity HoloLens Technical Preview

Ok, now that we have some idea about mixed reality applications, and the basic requirements for a development environment setup, let’s fire up the Unity HTP and dive into creating 3D content for our HoloLens application (in each of the steps below, I will walk you through building the scene in Unity, and explain the reasons why we select certain settings and configurations).

When you fire up unity (from this point on I will be referring to the Unity HTP as unity), you will initially be presented with the following window (when you run unity for the first time, it will ask you to create a free account if you don’t have one already):

Let’s click on the new project button, which will take us to the screen shown below:

In the window shown above, enter a project name, select a project location where the project will be saved, and make sure to select the 3D radio button, and click “Create project”. You will be taken to the main workspace of unity, in the new project you have created:


Next, let’s configure the main camera for our holographic application. Select the main camera in the left pane, and update its position to be at the origin of the world co-ordinate space by setting the X, Y, and Z values to zero in the Inspector pane to the right. We will also set the Clear Flags value to “Solid Color”:


The main camera is your window into the 3D world we are creating in unity. If we were designing a first person shooter and not a HoloLens app, the main camera would be what you see through your monitor when you play the FPS game – your monitor or screen would display whatever the main camera can see. In the case of our HoloLens app, the device is worn on the head and our eyes are the vantage point, so the main camera displays scenes (via the HoloLens) directly to our eyes, as if our physical eyes were the window showing what the main camera can see. This is why we will see 3D holograms merged into the actual physical world or room around us, as if they were part of the natural objects we see normally with our eyes.

Next (making sure the Clear flags setting is set to “Solid Color” as described in the last step), click on the background color to get the color selector window, and set the R, G, B, A values to zero:


Again, if this were an FPS game we were designing, we would have drawn a 3D world to be viewed through the main camera. But in the case of the HoloLens, the device can add light to the existing view you see of your real environment, but it cannot take light away from your eyes (i.e. it cannot display/render black). Anything the HoloLens renders as black will be transparent, which is why we set the color as shown above. This will render the whole scene transparent when the application runs, and we will be able to see the real world through the HoloLens, plus any holograms we render in the scene.

Next, let’s add a 3D object which will serve as our hologram (holograms can be much more complicated, but for the purpose of this tutorial our 3D cube will serve as a simple hologram). Click on the create drop down in the left pane (hierarchy window) and add a cube. Once added, select the cube in the left pane and set its position to X = 0, Y = 0, and Z = 5, in order to position the cube 5 meters in front of our eyes (i.e. 5m in front of the main camera). All units of position are in meters, so it is very easy to relate the position values to the actual scene we will see via the HoloLens. When viewed through the HoloLens, the cube we just created will be 5m in front of us (placed in the real world), literally.



Now, let’s enhance our 3D cube a bit, because it looks very plain and not very interesting. We will texture our cube and have it rotate perpetually, which is one step better than an inanimate white box. First, the texture. Textures are basically image files that can be used to ‘skin’ our 3D objects. If you do a search online for ‘free textures for 3D meshes’ you will be able to download a lot of free textures, and the unity store itself has a lot of free textures you can use. I will use a texture that I have, which is an image file (WoodTexture.jpg) of wood-like material. Just open the folder where you have your textures (i.e. image files) and drag the required texture to the assets pane:


Next (as shown in the three images below), right click inside the assets pane, create a material asset, and name it “CubeMaterial”. Once the material is created, select it in the asset pane, and then drag the texture we added earlier from within the asset pane into the “Albedo” selection property of the CubeMaterial in the inspector pane (you can play around with the metallic and smoothness settings – you will see a preview in the sphere shown at the bottom of the inspector pane). Once that is done, drag the CubeMaterial from within the assets pane and drop it on the cube in the hierarchy pane to the left. This will texture the cube we have in our scene window:




Now that we have a textured cube/hologram, let’s animate it a bit by having it perpetually rotate around a particular axis. We will do this by adding a script which defines the behavior we would like to see. Right click within the assets pane and create a C# script, and name it “RotateCube”. Once created, right click the script and select open, which will open it in visual studio. Delete the Start() method in the script, and add a public instance variable of type float, with the value 20f. Inside the Update() method, add the following statement:

transform.Rotate(Vector3.up, speed * Time.deltaTime);

Don’t worry too much about the syntax; unity has a great scripting API, and I recommend that you read through it to get a better idea of what is possible. After the modifications, the code should look similar to what is shown in the third screenshot below:
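For reference (in case the screenshots do not come through), the finished script should look roughly like the sketch below – the field name speed is implied by the statement above, and 20f means 20 degrees per second:

```csharp
using UnityEngine;

public class RotateCube : MonoBehaviour
{
    // Rotation speed in degrees per second; public, so it also shows up
    // (and can be tweaked) in the unity Inspector pane.
    public float speed = 20f;

    // Update is called once per frame: rotate the cube around the world
    // Y axis by a frame-rate independent amount.
    void Update()
    {
        transform.Rotate(Vector3.up, speed * Time.deltaTime);
    }
}
```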



Next, save the modifications in visual studio and exit the IDE. Back in our unity workspace, drag the script and drop it on the cube in the hierarchy pane. Once this is done, you can click on the “Game” tab in the workspace window and click the play button to see the effect of the script on the cube. You should see a rotating, textured cube. Click the play button again to stop the animation.


Ok, we have just created a textured, rotating 3D hologram, which we will build and deploy to the HoloLens emulator using visual studio. This will be explained in the next section. Save the scene we just created by going to File > Save Scene, and give it a name you prefer.

Building and Generating the VS2015 solution from Unity

Now that our scene is ready, let’s tweak some configuration settings required for building and generating the visual studio solution.

First, click on Edit > Project Settings > Quality, and in the inspector pane, set the option below the green windows store application icon to Fast:



Next, go to File > Build Settings, which will bring up the build settings window. Select the “Windows Store” option under platform, and click on the player settings button, which will present some configurations in the inspector pane. In these settings, select “Other Settings” and make sure the “Virtual Reality Supported” checkbox is checked, and that the Windows Holographic SDK is selected in the virtual reality SDK list (make sure you have selected the Windows Store platform in the build settings window, or else you may not see the proper options in the inspector pane when you click player settings):


Finally, click on the add open scenes button in the build settings window and add the scene we created. Select the SDK option to be “Universal 10” and the UWP Build Type option to be “D3D”. Make sure the Unity C# Projects checkbox is checked, and click on Build. This will open a file dialog asking you where to save the generated visual studio solution. Create a new folder within the file dialog called VSapp and click on select folder, which will build the solution and save it in the VSapp folder. Once the build is done and the files are generated, the folder contents will be opened in the file dialog:


That is all there is to building and generating the Visual Studio solution based on the Unity scene and holographic objects we created. In the next section, which is also the final step of this tutorial, we will open the generated solution in Visual Studio and deploy it to the HoloLens emulator.

Deploying the application in the HoloLens Emulator

If you’ve followed the tutorial this far, great! In this final step, let’s actually deploy the HoloLens application in the emulator, using Visual Studio.

Open up Visual Studio 2015 in administrator mode (right-click VS2015 > Run as administrator), and open the solution file that was generated by Unity (in our case, navigate into the VSapp folder we created, where the solution was saved). Once the solution is loaded, right-click the “Package.appxmanifest” file and click on view code. Modify the value of the “Name” attribute of the TargetDeviceFamily tag to be “Windows.Holographic”, and make sure the MinVersion and MaxVersionTested attribute values are similar to the second screenshot below:
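After the edit, the relevant tag in the manifest should look roughly like the following (the MinVersion and MaxVersionTested values are left as placeholders here; take the actual values from the screenshot, as they depend on your installed SDK):

<Dependencies>
  <TargetDeviceFamily Name="Windows.Holographic" MinVersion="..." MaxVersionTested="..." />
</Dependencies>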



Next, select the Release mode, x86 configuration, and the HoloLens emulator from the device list, as shown in the screenshot below:


Now, if we select Debug > Start without debugging from the menu, the emulator should start up and our application should be deployed and automatically run in it. Note that this procedure will take a bit of time, especially if the emulator is running for the first time:




You should be able to see the cube we created rotating in the emulator. You can navigate within the scene using the A, W, S, D keys, and look around by right-clicking the mouse and dragging inside the emulator-rendered scene. You should be able to walk around the spinning cube, look at it from different vantage points, and move away from it. The surrounding environment is all black, but on the real HoloLens device this will be rendered transparent, and we will be able to see the actual room or environment we are physically in, with the spinning cube hovering five meters in front of us.

Useful Links and resources

The following links should help in any further R&D you are motivated to carry out on HoloLens development, as much as they were a good source of reference and information for me:

– Development Tools Installation: Link
– Unity HoloLens Technical Preview download: Link
– HoloLens 101: Link
– HoloLens Documentation: Link
– Spatial Mapping: Link


I hope you enjoyed this tutorial and found it useful and motivating. I will be covering more HoloLens development concepts in depth in future articles, such as spatial mapping, emulator settings (especially the device portal option, which is very useful), input via gaze, gesture, and sound, and maybe some articles on Unity (HTP) modelling and scripting as well. Any errata, comments, issues faced when following this tutorial, and general feedback are most welcome in the comments section below.

The Toshiba MSX and 64KB of RAM – A Trip Down Memory Lane

When I was young, I grew up surrounded by old computing (and electronics) magazines, issues from the 60’s, 70’s, and 80’s, the majority of them being the popular publication “Creative Computing”. I used to read (and re-read) them voraciously during a period when people around me were working on PCs running Windows 3.1, which were just emerging into schools, offices, and, to a somewhat rare extent, homes. I remember being quite the antiquarian when it came to these old computing magazines and publications, but I enjoyed reading and learning about how the very first computers looked and operated, about the DEC PDP-11, the Macintosh, the IBM PC with all the Charlie Chaplin adverts, and how the landscape of personal computing was constantly evolving to become what it is today…

image credits: http://www.criticalcommons.org/Members/ccManager/clips/ibm-modern-times-ad/view



image credits: http://www.pcmuseum.ca/details.asp?id=40979&type=Magazine



image credits: http://www.computerhistory.org/revolution/personal-computers/17/303/1201



image credits: http://www.alticoadvisors.com/Blog/tabid/183/id/441/Friday-Funday-Blast-from-the-Past-Vintage-Microsoft-Ads.aspx

When remembering all those old computer magazines and articles, what I really miss and remain nostalgic about is my dad’s Toshiba MSX home computer, which he bought during the 80’s and which was the first computer I ever learned to program on. I was introduced to programming at a very young age, as I was always curious about this machine that my dad used to work on, where the display was a television set connected via RF cable and programs were stored and loaded on audio cassette tapes. The MSX home computer standard was first announced by Microsoft in 1983, and was conceived by Kazuhiko Nishi, then vice-president of Microsoft Japan. The MSX was quite famous for being the architecture that major Japanese game corporations wrote games for before the Nintendo era (the first version of Metal Gear was written for the MSX architecture).

Kazuhiko Nishi and Bill Gates in 1978:


image credits: http://f-ckyeahbillgates.tumblr.com/post/73515656034/jara80-kazuhiko-nishi-and-bill-gates-in-1978

The MSX HX-10 model which my dad owned was an amazingly simple machine by today’s standards, having no internal storage or hard drive, instead relying on an external audio cassette recorder to load and store programs on cassette tape (there was a cartridge slot, but audio cassette tapes were widely used as they were a cost-effective medium, with plenty available). The HX-10 was based on a Zilog Z80 processor and came with 64KB of RAM, although MSX BASIC left only about 28KB free for the user, with the rest used by the system; that was basically what you had to work with. The machine ran a version of the BASIC programming language known as MSX BASIC, which came pre-installed in ROM.

A Toshiba MSX HX-10 model with packing and user manuals (the cartridge slot is on the upper right of the machine):


image credits: http://www.nightfallcrew.com/07/11/2010/toshiba-msx-home-computer-hx-10/?lang=it


The rear of the HX-10 (you can see where the television RF cable and the audio cassette recorder connect):


image credits: http://www.amibay.com/showthread.php?48508-Toshiba-MSX-boxed-joystick-lightpen-and-games-(looks-NEW)


A cassette recorder of the type used to save and load programs for the HX-10:


image credits: http://www.retrocom.org/index.php/gallery/gallery-toshiba/hx-10-cassette-recorder-open-304#joomimg


What you see on the television screen once the MSX fires up:


image credits: http://www.nightfallcrew.com/07/11/2010/toshiba-msx-home-computer-hx-10/?lang=it


An early game for the MSX home computer (Blockade Runner):


image credits: http://gamesdbase.com/game/msx/blockade-runner.aspx

My dad used to program a lot as a hobby, and taught me a lot about coding and computer internals using the MSX HX-10 machine we had at home. Nowadays, I look back and remember him learning and coding sprite-based games and programs in (Z80) assembly, as graphics processing was quite slow when programmed in plain MSX BASIC. Programming in assembly language is generally considered a time-consuming, complicated, and unnecessary feat in the developer community today, but back then it was pretty much the norm, as it was the only way to write games with fast and smooth graphics on machines like the MSX HX-10. On top of that, there was no internet to search for solutions when stuck on a problem; all reference and troubleshooting was done through books and manuals:


image credits: https://archive.org/details/Z-80_Assembly_Language_Programming_1979_Leventhal



image credits: http://www.computinghistory.org.uk/det/24550/Getting%20the%20Best%20fron%20your%20MSX/



image credits: http://www.computinghistory.org.uk/det/15259/The%20Msx%20Games%20Book/



image credits: http://www.computinghistory.org.uk/det/15261/Useful%20Utilities%20for%20Your%20MSX/

Looking back, it was a very interesting and influential period of my life, in which I was exposed to the technicalities of how computers worked. I got so used to working with the MSX HX-10 that it was a strange feeling to work on a Windows (v3.1 and up) machine for the first time (you would assume it is simple to switch from a terminal-based 80’s machine to a Windows PC with its slick GUI and internal/external drives, but trust me, I remained used to the whirring tape recorder and the television set as a monitor for a long time).

Anyway, computing today has evolved to the point where (at times) fundamental concepts of how computers work, and simple programming fundamentals, are considered too low level or unnecessary. This is one symptom of the many abstractions, tools, and layers of software that have been built on top of the computing machine over the years, which have made it incredibly easy for everyone to use computers, but perhaps not to understand them well. This is a far different picture from the home computer owners of the 80’s, the majority of whom had to learn how the machine worked and program it themselves in order to use it, and even more contrasting with the 70’s, in the days of the Altair 8800, for example, which had to be programmed using only switches.

In my case, it was a rewarding experience to have learnt so much about computers and programming at a very young age, and it is a main reason for what I am doing today in my career. In spite of all the new hardware advances, the latest programming languages, software tools/applications, and computing abstractions (all of which I love, and work with every day), I am still drawn to vintage computers, old programming languages, and historical software systems, in large part due to my experience with the Toshiba MSX during my younger days. That part of my life is something I can always look back on and count myself lucky to have been through, and it continues to inspire and motivate my love of technology, computers, and coding.

Seveneves by Neal Stephenson

In my (humble) opinion, when it comes to intellectually stimulating, hardcore cyberpunk science fiction, there is no better author than Neal Stephenson, who delivers in each and every one of his novels to date. For anyone who likes vast concepts steeped in technical, scientific, philosophical, and mathematical landscapes, portrayed in even vaster settings within (science) fiction, Neal Stephenson presents just that, taking your brain on an intellectual roller coaster ride each time you read one of his books. The very first book of his I read was Cryptonomicon (a copy that belonged to my uncle), and it blew my mind. I went on to read two more of his novels, Anathem and Reamde, which showcased his ability to create unimaginably different worlds and settings from book to book. For instance, Anathem is speculative fiction in a monastic setting (with a theme revolving around the many-worlds interpretation of quantum mechanics) with deep philosophical implications, whereas Reamde is a fast-paced techno-thriller revolving around an MMORPG, with crypto-currencies, social networking, and hacker culture thrown into the mix.

Coming back to why I wrote this post: Seveneves is a novel by Neal Stephenson published in 2015, recommended by Bill Gates, who says it is the first science fiction book he has read in a decade. It is a work of speculative fiction, and starts off with a major catastrophe, the destruction of the moon, which triggers a series of apocalyptic events. The story spans from the time this event takes place to thousands of years in the future, which in itself requires the reader to digest a vast time range of activity pertaining to the human will for survival. For me, it is a book with an interesting premise that evokes a lot of thought and insight.

My advice on reading books by Neal Stephenson: pick one with a theme that interests you, and see if you like his style and presentation (you might learn a lot about mathematics, cryptography, the Enigma machine, Van Eck phreaking, UNIX, and the history of World War Two just by reading one book, which is what happened when I finished Cryptonomicon).

Installing MINIX 3 on QEMU/KVM with networking

For quite some time I’ve been wanting to play around with the source code of MINIX, partly because I like to understand how operating systems, well, operate, and partly because I am a firm believer in the design philosophy of a modular microkernel OS architecture (vs the traditional monolithic approach) and wanted to see how one is implemented. I tried to set up a MINIX installation on VirtualBox, but could not get networking to function properly inside the MINIX guest OS, which I needed. This post is basically a detailed (b)log (expanding on the official MINIX guide) of how I switched to QEMU/KVM and got a working MINIX 3 installation, complete with networking, to experiment with (when I mention “networking”, I am referring to accessing the internet from the MINIX guest OS).

First off, I should mention that my host OS is Ubuntu 16.04 (32-bit) running on a Core 2 Duo machine with 1GB of RAM. I also enabled the virtualization extensions in my BIOS (required by KVM; otherwise there will be an error during the MINIX installation), which is usually trivial to configure on most machines.

A word about QEMU and KVM: QEMU (short for Quick Emulator) is an open source machine and peripheral emulator, focusing primarily on portability. KVM (Kernel-based Virtual Machine) is a Linux kernel module that handles hardware-assisted virtualization, and is part of the mainline Linux kernel by default (a KVM-specific fork of QEMU, qemu-kvm, existed for a while before its changes were merged back upstream). To make a long story short, QEMU is stand-alone software, but will use KVM for acceleration if it is available. There are many online resources on these two technologies that can be referred to for a further understanding of emulation vs virtualization.

In order to install QEMU in Ubuntu, we need to type the following in the terminal:

sudo apt-get install qemu qemu-kvm libvirt-bin

Once this has run, QEMU will be installed on your host system.

Next, we will create a folder to hold our QEMU VM image. On my system, I created a folder named “qemuVMs” in my home folder. We also need the MINIX 3 ISO file that will be used to install the MINIX system, which can be downloaded from the MINIX download page. At the time of this writing there were two versions available for download, and I selected version 3.3.0. Once the file is downloaded, extract the contents to get the ISO file, which needs to be copied into the folder we created earlier (in my case, ~/qemuVMs/).
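For example, assuming the archive was downloaded and extracted into your Downloads folder (adjust the paths and file name to match your system), the setup boils down to:

mkdir ~/qemuVMs
cp ~/Downloads/minix_R3.3.0-588a35b.iso ~/qemuVMs/
cd ~/qemuVMs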

Once the ISO file is in our folder, we can fire up a terminal, navigate to the folder containing the MINIX ISO file, and type the following command to create the VM image:

qemu-img create minix.img 2G

The above command creates a VM image named minix.img with 2GB of space to hold our MINIX system (you can change the image name and size as you see fit). The contents of the folder should now be something like the following:

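Running ls in the folder should show the new image alongside the ISO (assuming the default names used above):

$ ls
minix.img  minix_R3.3.0-588a35b.iso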

Once the VM image is ready, we can boot the ISO file by running the following command at the terminal:

qemu-system-x86_64 -localtime -net user -net nic -m 128 -cdrom minix_R3.3.0-588a35b.iso -hda minix.img -boot d

This command tells QEMU to boot from the minix_R3.3.0-588a35b.iso CD-ROM image, with minix.img attached as the hard disk, and to allocate 128MB of RAM to the VM. Remember to replace minix_R3.3.0-588a35b.iso with the name of the ISO file you downloaded when running the above command.

If everything goes as planned, you should be taken through the normal MINIX 3 installation routine, which you can follow based on the guidelines given on the MINIX site. There are two important points to note during the MINIX setup:

1. When the option to select a network interface is given, select the “Virtio network device” option.
2. When asked whether to configure network using DHCP or manually, select “Automatically using DHCP”.

(We can always change the above network configuration in MINIX later by typing “netconf” at the MINIX command prompt, but setting it during the initial setup saves us a step when we reboot into the new system.)

Once MINIX 3 is installed in the VM, we need to change some configurations in the newly installed system in order to utilize the virtualized (virtio) disk and network devices and have internet access from MINIX. So boot into the new MINIX system by typing the following at the (host OS) terminal:

qemu-system-x86_64 -rtc base=utc -net user -net nic -m 128 -hda minix.img

Once we are logged into the MINIX system, we need to go up one directory from the default location and navigate into the /etc folder to modify the boot.cfg.default file. The following image shows how I navigated from the point I logged into the MINIX system to the point where I edit the configuration file, using the (ported) vi editor already available in MINIX:


Open the file with vi and add a new menu entry with the following contents (all on one line):
menu=Start MINIX 3 latest serial virtio:load_mods /boot/minix_latest/mod*;multiboot /boot/minix_latest/kernel rootdevname=$rootdevname $args cttyline=0 virtio_blk=yes

(Basic vi editing commands can be found on numerous sites online. Generally, you press i to enter insert mode and type your text, press the Escape key when you are done editing, then type “:wq” at the vi prompt to write the changes to disk and exit the editor.)

You can see the last three lines in the screenshot below, where I have added this new line of text to the boot.cfg.default file:


Once the above changes are done, we can exit the editor. We are still in the /etc folder, so we go back up one step in the file hierarchy and then navigate into the /bin folder for the next step. Inside /bin, we just need to run the update_bootcfg command (type “update_bootcfg” at the MINIX command prompt and press Enter) for the changes we made to the boot configuration file to take effect. The screenshot below shows this (from the point we exit the vi editor):


After the step above is completed, we can shut down the system by typing “poweroff” at the MINIX prompt, and reboot into the system using the following new command:

kvm -net nic,model=virtio -net user -drive file=minix.img,if=virtio -serial stdio -m 128

(It would be convenient to put this command in a shell script so that we do not need to type it each time we fire up the MINIX guest OS, as shown below.)
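A minimal sketch of such a script (the file name boot-minix.sh is just my choice, not anything MINIX- or QEMU-specific):

#!/bin/sh
# Boot the MINIX guest with virtio disk and network devices,
# a serial console on stdio, and 128MB of RAM (same command as above).
kvm -net nic,model=virtio -net user -drive file=minix.img,if=virtio -serial stdio -m 128

Save it in the folder containing minix.img, make it executable with chmod +x boot-minix.sh, and start the guest with ./boot-minix.sh.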

Now, when you boot into the MINIX guest OS, you should see a new option named “Start MINIX 3 latest serial virtio”, which is what we configured in the boot configuration file earlier (option 6 in the image below):


If you select this boot option and log in to the system, you will find that you have full access to the internet via the virtual network interface we configured. If you run “pkgin update” at the MINIX prompt, you should see MINIX retrieving package details over the internet and updating its package database.

If you run into any issues when installing MINIX 3 (with networking) on QEMU following the steps outlined above, please do leave a comment. The MINIX 3 Google group is also a great place to check for common issues and support, and a better forum to put your problems and discussions forward, as the answers will benefit the whole MINIX community.

2013 in review

The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 11,000 times in 2013. If it were a concert at Sydney Opera House, it would take about 4 sold-out performances for that many people to see it.

Click here to see the complete report.