This article describes my opinion of the term and title ‘software engineer’, the implications behind it, and how it ties (or should tie) into the viewpoints of the engineering discipline at large. (Blog header illustration created using Canva)
I very rarely spend time on philosophical debates relating to software development, but last week I came across an article at ‘The Atlantic’ that prompted a train of thought, culminating in this blog post. I suggest you read it before continuing, as it has a very interesting premise on why programmers should not call themselves (software) engineers. I neither agree nor disagree with the author or the article, but it contained some very interesting ideas that motivated me to document my thoughts and opinions in this post…
First, a bit about the term ‘software engineering’ from my own perspective and understanding. Throughout my career, I have worked with people who were (and are) referred to as software engineers, themselves being graduates of computer science, information systems, or software engineering, the title matching the name of the undergraduate degree that each followed in his or her academic journey. I myself hold a degree in ‘software engineering’ issued by a well-known university situated around Regent Street, north-west of central London, in the United Kingdom. Therefore, the term or title ‘software engineer’ was something I was (and am) quite comfortable with, whether in how I introduce myself or in how I refer to my colleagues.
On the origins of the term ‘software engineering’, the article in question quotes a fact that is commonly taught in the industry and field of software development today: that the term ‘software engineering’ was first deliberately used at the 1968 Garmisch-Partenkirchen conference organized by NATO. It is said that the term was coined provocatively, in a bid to challenge software development at the time to align with the rigorous processes and methods followed by the established branches of engineering. But it was interesting for me to come across an article by Bertrand Meyer that provides some evidence that the term was used at least two years prior to the NATO conference, and in a positive light, indicating (at the time) that software development should be considered an engineering discipline in its own right.
The established engineering disciplines are deemed ‘established’ on the basis of the rigorous processes, regulatory bodies, certifications, licenses, continuous learning, and ethical codes they follow. I am quite firm in my understanding that this is a good thing. But some of these aspects came about mainly due to the problems and engineering disasters that were prevalent in the 19th and 20th centuries, which highlighted the need for standards, ethics, regulations, certifications, and the rest. There is always a scenario that prompts the need for better processes, regulations, and policies. This was widely characterized in the software development world over the last two decades, when a myriad of development philosophies and approaches were invented. Even Robert C. Martin discusses this situation, and tries to convey the message that if software developers (engineers?) don’t get their act together, we will be imprisoned by regulations and policies dictating how we should develop software.
If the practice and business of developing software is expected to evolve into an (established) engineering discipline, its progress will always be compared to that of the other mature engineering disciplines. This outlook and comparison was not favoured by some, which resulted in a large number of software development circles stating that software is more art/craft/science than engineering, and that there are better ways to build software than rigid engineering processes. This viewpoint is widely respected and quite popular in the software development world today. On this side of the wall, software development should be lean, agile, and dynamic, without heavyweight engineering processes and regulations holding these ideas back, which is practical to all intents and purposes. Software is an artifact and material unlike anything that traditional engineers are used to working with, and early lessons taught us that building software to specifications with rigid processes full of blueprints and schematics was not the best way to go about things. But the problem was (and still is) that the computing infrastructure (used by the public, consumers, clients, business users, etc.), which is the end product of software development, still requires the same rigidity, reliability, safety, and conformance that a bridge or building built by a civil engineer should have. This is expected by a society that functions daily on the trust placed in the infrastructure around it, built by engineers. And therein lies the disdain towards the software development world (mostly from the engineering community, I suppose), where systems go wrong, bugs seem to evolve and reproduce, and software development appears random and haphazard. The engineering world has its fair share of problems and disasters, just as there are software project failures and disasters.
But a crucial difference is that the engineering disciplines will emphasize and prioritize regulations, standards, public health and safety, and ethics, even when failing to deliver.
Just as engineering has foundations in the natural sciences, software ‘engineering’ too has its origins in philosophy, mathematical logic, and various other branches of science that have no direct correlation to what we do today as developers of software. The early proponents and pioneers who set the stage for software development to become a widespread phenomenon were computer scientists, mathematicians, particle physicists, and information theorists. Not engineers per se. Searching online, I came across the Professional Engineers Ontario website, where the definition of professional engineering is formally stated. I specifically chose Canada, as it is one of the countries that is extremely strict about the title ‘engineer’ and about who is licensed to carry it. I would like to quote directly the definition of the practice of engineering, taken from their website:
Professional engineering is:
any act of planning, designing, composing, evaluating, advising, reporting, directing or supervising (or the managing of any such act);
that requires the application of engineering principles; and
concerns the safeguarding of life, health, property, economic interests, the public welfare or the environment, or the managing of any such act.
Ring a bell? It did for me, at least. This seems to sum up what I have been doing for the past few years in my tenure as a software engineer. In a sense, what’s missing is the regulatory bodies, the policies, licenses, code of ethics, and the rest of the bells and whistles.
Let me summarize (thank you, if you are still reading) and propose an approach towards building better software, gaining the respect of the traditional ‘mature’ branches of engineering, and proving ourselves worthy of the title ‘software engineer’.
- First, I lied about software development not having a code of ethics: software engineering does have one, known as the IEEE CS/ACM Code of Ethics and Professional Practice, but there is no regulatory body to enforce it. It would be a great start to read it and see how applicable it is to our everyday work. We may not have regulatory bodies, policies, practice licences, etc., but that is no excuse for not committing to the best standards and practices, or for not being ethical. In the popular book Content Inc., one of the main (controversial) ideas conveyed by the author is that you should build an audience before you figure out what it is you want to sell them. Ours is a somewhat similar situation, where we need to build our audience (a society expecting traditional engineering recipes) and figure out what to sell them (trust and reliance in software development using tailored, non-traditional engineering processes, backed by ethics and a sense of responsibility).
- Second, the use of sound engineering principles: the design and architecture of the software-intensive systems we build, the practices and guidelines followed when coding, the project management and costing aspects, and the control and quality checks all rely on the application of sound engineering principles in the construction of software. True, there is no universal standard or guideline dictating how we go about doing this (in fact, there is a great variety of approaches and methodologies), but whatever we trade off or cut down on, it is advisable to always reflect and check whether what we are doing can still comfortably be called ‘applying sound engineering principles’, without any hint of doubt in our minds.
- Third, continuous learning and certifications: the software development world is plagued with people who are too busy learning and have no time to work, or too busy working and have no time to learn. Jokes aside, continuous learning is something we developers are familiar with, due to the evolving trends and technology waves that tow us in their wake. There is no one person or body to standardize it and tell us how, why, what, and when to learn, but we have ample certification programs to showcase to employers how good we are. Maybe we should have that same pride and attitude in showing that we are capable of building great software, and that our certifications and achievements are a testament to our responsibility and obligations towards our stakeholders as well.
That was not a short summary, and I apologize for that. Looking back, this post is not a negative response to the original article at ‘The Atlantic’; rather, I was interested in critically analyzing and identifying the weaknesses in software engineering, which is constantly being targeted by the traditional engineering disciplines. That being said, in closing, a big salute to all the software engineers, developers, and programmers who constantly contribute, day in and day out, to advancing the field of software engineering towards being a mature and well-respected discipline and industry.
No, I’m not referring to Einstein’s theory of relativity (though I could draw some parallels if I really thought about it). Something I was reading in an algorithms/data structures book recently struck a profound note in my thinking, though we have all learnt it and overlooked it at times. The topic was the analysis of algorithms using Big-O notation, and the words that hit home were “relative rates of growth”. We have all learnt this before and understand the simple matter of comparing algorithms based on their growth rates. But when we compare the relative rates of two functions, we are no longer comparing the functions themselves. For instance, say that f(N) = N² and g(N) = 1000N. It does not make sense to simply say that f(N) < g(N): that holds for all N less than 1000, but is false for all N greater than 1000. Therefore, it is the growth rate of f(N) that is compared relative to the growth rate of g(N). In this context, it makes sense to say that f grows faster than g; informally, O(f(N)) > O(g(N)).
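To make the crossover concrete, here is a tiny sketch (my own illustration, not taken from the book) comparing f(N) = N² against g(N) = 1000N on either side of N = 1000:

```python
# f grows quadratically, g linearly; their *values* compare
# differently on either side of the crossover at N = 1000,
# but the *growth rate* of f is always the higher one.

def f(n):
    return n * n        # O(N^2)

def g(n):
    return 1000 * n     # O(N)

# Below the crossover, the function with the bigger Big-O
# is actually the smaller value...
assert f(100) < g(100)          # 10_000 < 100_000

# ...but past it, the quadratic wins, and keeps winning:
assert f(10_000) > g(10_000)    # 100_000_000 > 10_000_000

# The ratio f(N)/g(N) grows without bound as N grows, which is
# what comparing growth rates captures: g(N) is O(f(N)),
# but f(N) is not O(g(N)).
print(f(10_000) // g(10_000))   # → 10
```

Which is why no single pair of values settles the comparison; only the trend as N grows does.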
This made me jump track and think of agile estimating. We estimate a base story and then estimate the remaining stories in the backlog relative to the base story’s value. There is no specific unit defined, but we call them story points. There is no specific time unit associated with the relative growth rate of an algorithm, or the relative complexity of a user story, yet we use them to make some pretty important decisions in our work. We compare one algorithm’s complexity relative to another, and estimate task complexity relative to another ‘known-sized’ task. This makes me wonder: what else do we knowingly or unknowingly measure in relative terms in our day-to-day work?
There is little doubt that measuring something in relative terms produces more accurate results than, say, attaching a specific time or complexity unit to a mathematical function or user story. It helps us measure and calculate things to a large degree of accuracy, and in my opinion has quite a number of similarities with relativity in physics. One such similarity is the concept of a frame of reference: in algorithm analysis, a frame of reference could be the computer platform that the algorithm is run on; in agile estimation the definition is a bit fuzzier, but could be the team size, for example. You could come up with one set of relative values in one frame of reference, and a completely different set in another, whether it’s relative story points, growth rates, or velocity (in physics). It’s interesting to see where this kind of thinking can lead us, and it would be even more profoundly interesting to see the hint of a large, as-yet-undiscovered unified software theory of relativity lurking under the bits and bytes, waiting to be stumbled upon and discovered.
Usually, when I come across something technical that I need to look up or research at work, I start digging into the web and other resources… too deep, in fact. For instance, if I wanted to study a particular technical concept related to my work, I would start reading it up on the web, and almost always I would run into a prerequisite piece of information that I need to know in order to understand what I am currently reading. So I branch out from there and start searching and reading up on the prior knowledge I need, which in turn might spawn new concepts and more prerequisites I may need to learn, in order to climb back up the chain of facts and understand them.
My problem is that my brain does not allow information to be read and stored in a half-understood manner. If I read something on a specific topic and conclude that I truly did not understand the whole of it, I start a recursive pattern of reading and researching related concepts, facts, history, and lots of other information. This can be a good thing, and at times a bad thing.
A real-life scenario might paint a more intuitive picture. A few days back, I came across the STAThread and MTAThread attributes used in .NET for communication with COM constructs. I wanted to know why and how they are used in .NET, what thread apartments are, what roles the COM objects play, etc. The objective of this exercise was to be better armed with knowledge of .NET and COM communication when I went about my development work. Initially I had opened a few MSDN pages explaining the attributes in .NET, which prompted me to look up COM and COM implementation, which led me to STA (Single-Threaded Apartment) and MTA (Multi-Threaded Apartment), which led me to… well, you get the idea. About thirty to forty browser tabs later, I was getting ready to write some C++ code, trying to build the COM server described in the web page I was reading at that point. If I had run into any errors in the C++ code, I would have started off a new search on that issue… and so on.
The above scenario may be familiar to some, and others may be shaking their heads already… But looking again, isn’t this how we software developers are encouraged and motivated to work? To dig deep into facts, to read source code examples, to write and test them, and to gain all the knowledge needed to master what we do not know? It is how most of us work, or at least how most of us are driven to work when we encounter something we need to learn or research at work. Thinking back for a moment on the scenario I described, my main objective was to gain a better understanding of the STAThread and MTAThread attributes so that I could approach my development task with knowledge, in case, for example, I needed to consume COM services from the .NET application. Starting to read up on it and ending up learning COM interfaces, coding COM servers, and trying to figure out the C++ errors in the COM implementation was just me wandering away from the main issue at hand. My task was not to write COM implementations, but to implement features in a .NET application which might communicate with COM. In projects, a large share of the time that should be spent on delivering value to the customer can end up spent on this kind of recursive research, conducted to satisfy our curiosity about how something works. This results in unproductive detours and deviations from project and customer-focused goals. Satisfying our curiosity and working hard to find out how something works is a good thing, but not when it happens within time slots that are meant for work. It would be more appropriate and professional to concentrate on such knowledge enhancement outside of our work hours.
Does this mean we should go about our daily operations blindly? Not search for or read up on facts that will help in our daily work? Ignore what we don’t understand or haven’t studied before? That is not the case, as fact-finding and technical research are a mandatory part of a developer’s daily work agenda. It is the approach and mindset with which you do it that matters. In my future work, I will be concentrating on the following aspects:
- If I come across an unknown fact, a forgotten concept, or anything technically interesting, I will first check whether it is directly relevant or critical to the task at hand. If not, I know I can continue my work on the project and finish some features or functionality, and after work or at home, I can continue researching and reading up on what interested me earlier.
- I may come across lots of interesting information and facts, but I should practice filtering out obsolete or legacy explanations, articles of poor reputation, and irrelevant information. This is a smarter and more focused way to gain valid, working knowledge, whether at home or at work.
- Learn to read or skim through an article and gather the relevant facts, rather than reading it like a story or novel.
- Keep the big picture in mind. I should be able to connect and structure the disparate information I read about. After all, abstraction, the ability to organize and model, and the ability to compartmentalize facts in our heads are what make us who we are.
- I do not need to know everything under the sun about a topic. I can gain knowledge on it incrementally. Accumulating knowledge on a need-to-know basis, and applying it in the same manner, can be refreshing and produce faster results.
I cannot be certain that what I have listed above is the best approach, but for me it is a step in the right direction towards being productive at work, adding value for the customer, and gaining knowledge and expertise in the workplace. Of course, I will still be hacking away at code and reading facts, articles, ebooks, etc. in my free time, but with a more analytical mindset rather than in a haphazard fashion. Though the title of this blog post mentions technical knowledge, this applies to anything we want to study or know more about. The end result is an analytical, filtered manner of accumulating knowledge, where you learn more in less time and increase your efficiency and productivity at work and at home.
It was a great week at 99X Technology, as we had just completed the annual Vesak lantern competition, which had been the buzz around the office for the past few days. After hours and lunch breaks found people toiling away to come up with the best-designed Vesak lantern. Vesak is the holy Buddhist holiday commemorating the birth, enlightenment, and passing away of Lord Buddha, and Vesak lanterns are the artful and creative constructs (along with their associated decorations) that symbolize it.
At 99X Technology, the different teams (employees are grouped into teams, and the team that constructs the best-designed lantern wins the competition) put all their skills, creativity, and talent into producing some of the most amazing Vesak lanterns I have seen (despite the fact that our major, and sometimes only, occupation is designing and building software). As a proud contributor in my own team, it was great to experience the teamwork, the multiple views and opinions, and the multi-faceted talents on display. The end products themselves can be seen below:
Recently, I ran into some problems creating the RSA key pair that GitHub requires before committing or pushing code to an online repository. I had downloaded the RailsInstaller kit (from here) and was taking my first stab at Ruby and Rails. Creating an open repository on GitHub for pushing code required the creation of an RSA key pair, as described in the help pages. This is necessary as GitHub uses SSH for the transfer of files to and from the local machine.
I had successfully created the RSA public and private keys, but ran into issues when testing the connection to the GitHub repository, as GitHub was not picking up the correct location of the keys on my local machine. To cut a long story short, the following describes what should be done to avoid the hassle I went through.
When you run the RailsInstaller kit, Git is installed alongside Ruby, the Rails framework, and SQLite. Navigating to the RailsInstaller folder after installation, you will find the bin folder inside the Git folder (/RailsInstaller/Git/Bin/), which contains the tools necessary for creating the RSA key pair. You have to include Git’s bin location in the PATH environment variable (if it is not configured automatically during installation) in order to easily access the tool set from the command prompt. Also, if the ‘home’ system variable is missing, you will have to create a variable named ‘home’ with the value ‘C:\’.
After setting the environment variables, I created a folder named “.ssh” in my C drive; this is where GitHub will look for the public key. Once the folder was ready, I created the key pair using the ‘ssh-keygen’ command. You have to enter the .ssh folder you created as the location (watch out for the UNIX-style folder path), and id_rsa as the key (file) name. This will create a private key (id_rsa) and a public key (id_rsa.pub) inside the .ssh folder. Also enter a good passphrase, or hit enter without typing anything for a blank passphrase (though entering one is recommended):
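The steps above can be sketched as a short session in the Git bash shell. This is my own summary of the procedure, not an official recipe; the paths follow the setup described here, with ‘home’ pointing at C:\, so adjust them to your own machine:

```shell
# GitHub's tooling looks for keys under $HOME/.ssh --
# with the 'home' variable set to C:\, that is C:\.ssh.
mkdir -p "$HOME/.ssh"

# Generate the RSA key pair; -f names the key files.
# You will be prompted for a passphrase (recommended,
# though it may be left blank).
ssh-keygen -t rsa -f "$HOME/.ssh/id_rsa"

# Print the public key, to be pasted into your GitHub
# account settings (copy it without extra whitespace):
cat "$HOME/.ssh/id_rsa.pub"
```

The private key (id_rsa) never leaves your machine; only the .pub file is shared with GitHub.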
If you navigate into the .ssh folder, you should see the two keys that were created, id_rsa and id_rsa.pub. The next step is to add the public key to your GitHub account. Here, you will have to open the id_rsa.pub file in a text editor, copy the contents without any whitespace or newlines, and paste them as described in the “Adding the key to your github account” section on this page.
To test that everything went well, you can use the ‘ssh’ command, typing ‘ssh firstname.lastname@example.org’ (as described in the GitHub help page as well) to see if you can connect properly. When the prompt asks whether you want to continue connecting and you type yes, a ‘successfully authenticated’ message should be displayed:
The message shown above means that GitHub properly identified your public key, and you can now push code from the local repository to GitHub using Git. Note that none of the above steps will work properly if the ‘home’ system variable does not exist or does not contain the value ‘C:\’, as it indicates the home location used to search for the .ssh directory. Now that my relationship with GitHub is stable and secure, my next adventure will be deploying the Rails application on Heroku, which will most likely turn into a future blog post. My first attempt at Ruby (and Rails, and Git, and SQLite), and I can safely say: “so far, so good”.
Hi, and welcome to Forays in software development. My name is Mark Sinnathamby, and I live in Sri Lanka. I graduated from the University of Westminster, majoring in software engineering, and am currently a developer at Eurocenter DDC.
This site contains blogs and articles on various topics, many of them to do with my experience in software development and tools, in and out of the industry. Please share your thoughts and comments by replying to blog postings or by mailing me at email@example.com. Happy coding!