My Technology Stack in 2016_07 and How I Prefer to Choose It




Social Process Assessment Based Selection Criteria



Obviously my technology stack will change to some extent during my lifetime. At the very least I intend to keep on learning and to stay critical of my previous choices. However, as of 2016_07 I do have the opinion that the projects I see as being among the technically most successful projects in the world, for example the Vim editor, the REDUCE Computer Algebra System, the Linux and BSD kernels, Scilab, gnuplot and many other mathematics related projects, GCC, etc., are all projects that have been using the very same technology literally for decades. I find the architecture of the REDUCE Computer Algebra System particularly ingenious, because it ships with its own Lisp implementation that is written in portable C, and all the hard math, symbolic calculation included, is written in Lisp. Truly smart. REDUCE has survived over 40 years, is still competitive in 2016, is still being maintained by its original author and has an extremely advanced, skillful and professional user base. Those projects demonstrate that it is possible to choose technology that stays available for a long time and that technical excellence, combined with technically disciplined, technically smart and socially smart developers, outlives all fashion trends. My general observation is that business tends to change a lot, but what has not changed (probably) for thousands of years are the laws of nature. Humanity's capability to understand nature changes, from apes to 2016 humans to future aliens, but the laws of nature presumably stay the same, at least within a single universe. All of the long-lived software projects are used for processing data from the natural sciences or are used by people who write technical software, not typical business software.

I am not a social person. I LIKE THINGS THAT WORK and prefer technical excellence to empty talk and social turmoil. I as a person am a really bad fit with superficial people, and typical business people are superficial. I as a person fit best with engineers, scientists and other people who take their time for thinking and for studying things, who are willing to learn new things and who can withstand unexpected results. I do not fit into the typical mega-corporate world AT ALL; I'm really at odds with almost everything that takes place there. The start-up world and freelancing are my oyster, or at least that's the place where I fit in, provided that the clients are calm, patient, thorough and long-thinking enough.

Given my personal requirements for the technology that I want to develop and use, namely the long term perspective, the technical excellence (by excellence I mean no sloppiness in API design, attention to speed and memory consumption, the fact that 99.99% of compromises are calculated rather than accidental, lack of crashes or at least quick error discovery, calculated portability, etc.) and the requirement that at least someone somewhere uses the technology for processing data from nature, I must avoid all technologies where the social processes compromise any of those properties. I will not go into all of the details in this blog post about how social processes can undermine a project for me, but usually the mechanism is that someone breaks a dependency by cutting corners, reduces the portability of a dependency, bloats a dependency's RAM requirements, slows it down or introduces security flaws into the dependency. Lawyers, Governments and various kinds of censorship form another threat.

A partial list of technologies that have totally failed by my standards: Delphi, Microsoft Visual Basic, Java and other JavaVM specific programming languages, for example Scala, Microsoft Silverlight, Adobe Flash. I find the fall of Java a particularly sad tragedy, because a lot of excellent application level work is destroyed with the fall of Java. It is possible to say that the open source community did save MySQL from the Oracle corporate meat grinder, but did not care to save the JavaVM, with the possible exception of the effort from JetBrains (archival copy).

Specialists of different domains, for example biologists, mathematicians from different branches of mathematics, physicists and computer scientists, tend to prefer different programming languages. An application that uses the best domain specific libraries available on planet Earth therefore has to use multiple programming languages simultaneously. The technically best components usually took a long time to develop, often more than a decade, which means that the technically best components have already been created by using technology that has withstood the test of time and has probabilistically survived all possible threats, provided that the components are not some corporate property like Java is in 2016. The C# people have learned the lessons (archival copy) from the Java case, and as of 2016_07 I consider C# to be protected from patent trolls and single-vendor financing based risks almost to the extent that I consider C++ to be safe, but the problem that I see with C# (archival copy) is that all of the financiers of C# seem to be business software developers, and business software does not have the requirement to be robust, fast and RAM-conserving. As of 2016 formal verification does not seem to be of much interest to business software developers. As of 2016_07 the MonoDevelop IDE really appeals to me despite its flawed Vim key bindings support, but, again, the business software orientation of C# makes me cautious. The psychopaths that run global corporations do not mind Government induced restrictions, including 1984. Whenever some Government requires that every "legal" CPU contain some Clipper Chip, the global corporations just accept that and run their business software on the latest and greatest hardware that contains the Clipper Chip, but people like me, who do not accept that kind of arrangement, might have to produce software that has to run on far less capable hardware, something that is not (yet) known to contain any Clipper Chips. In that scenario the majority of C# users are able to "afford" more computationally powerful computers even when the disparity is not based on money or the ability to buy the more powerful computers. Fully legal businesses always comply with laws, even if the law requires them to rat out their clients. Businesses that are optimized to maximize monetary profit have an incentive to cut costs by not investing in security measures that can withstand Governments.



My Technology Stack



I know that in 2016 C# is faster than Ruby, but the Ruby audience is wider and consists of more non-business-software developers. That makes it more likely that even if Ruby is slow, it will be developed very carefully and with technical requirements placed at a higher priority than business requirements. My personal experience as of 2016_07 is that unless the algorithm resembles some signal processing algorithm, classical computational complexity optimization combined with data traffic optimization, the dispersal of computation over time and well thought out caching of the computation results gives pretty comfortably usable results even with PHP. Ruby tends to be faster than PHP.
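
To illustrate the kind of caching I have in mind, here is a minimal Ruby sketch; the cache location and the expensive_computation helper are hypothetical. The idea is simply that a result is computed once, written to the local file system and reused on later calls, so the raw speed of the language matters less.

    require 'digest'
    require 'fileutils'

    CACHE_DIR = '/tmp/computation_cache' # hypothetical cache location
    FileUtils.mkdir_p(CACHE_DIR)

    # Stand-in for the real, slow algorithm.
    def expensive_computation(input)
      sleep(2) # pretend this takes a long time
      "result for #{input}"
    end

    # Returns a cached result if one exists for the given input,
    # otherwise computes it once and stores it for later calls.
    def cached_computation(input)
      cache_file = File.join(CACHE_DIR, Digest::SHA256.hexdigest(input.to_s))
      return File.read(cache_file) if File.exist?(cache_file)
      result = expensive_computation(input)
      File.write(cache_file, result)
      result
    end

    puts cached_computation(42) # slow the first time, instant afterwards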

I'll use Ruby for build automation, application glue, computer science experiments.
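
As a sketch of what I mean by build automation glue, here is a minimal, hypothetical Rakefile; the file names and the compiler command are only placeholders.

    # Rakefile -- Ruby as build automation glue (hypothetical targets).
    task :build do
      sh 'gcc -O2 -o hello hello.c'   # placeholder compilation step
    end

    task :test => :build do
      sh './hello'                    # placeholder test run
    end

    task :default => :test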

I'll use PHP as glue between web server implementations and the web software core.

3D and end user GUI-s will be based on web browsers, which suffer from bloatware and security related issues even more than C# does, but at least they are pretty and comfortable to use.

I'll use C# libraries for accessing business related document files.

Some speed optimized algorithms, probably mostly those that never use dynamic allocation and are written to fulfill avionics software robustness requirements, will be implemented in formally verified C/C++, especially those algorithms that might be usable with microcontrollers.

Speed optimized algorithms, where memory is dynamically allocated, will probably be implemented in ParaSail.

I'll need to learn to use/compile/administer the Genode operating system.

Truly lightweight 2D-GUIs will probably be implemented by using the Free Pascal based MSEide+MSEgui. The architecture of the Pascal GUI library will probably be based on the Raudrohi State Cluster Specification Type 1, and the library will probably contain a domain specific language interpreter that allows the GUI to be manipulated through a text based interface, just like web browsers are manipulated by using HTML, except that the communication between the "browser" and the "server" will be full-duplex. The idea is that just like web browsers can be switched without needing any changes to the web application code, the Pascal GUI library based GUI-runner can be swapped with a web server and web browser based GUI-runner.
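
A rough Ruby sketch of that swap idea, with entirely hypothetical class names and a made-up text format; the point is only that the application code does not know or care which runner actually draws the GUI.

    # Hypothetical common interface for GUI runners. The application emits a
    # text based GUI description and the chosen runner renders it.
    class GuiRunner
      def render(description)
        raise NotImplementedError
      end
    end

    class MseguiRunner < GuiRunner
      def render(description)
        # Would forward the description over a full-duplex channel to the
        # MSEide+MSEgui based native runner; here it is only printed.
        puts "[native runner] #{description}"
      end
    end

    class BrowserRunner < GuiRunner
      def render(description)
        # Would translate the same description into HTML and push it to a
        # web browser, for example over a WebSocket; here it is only printed.
        puts "[browser runner] #{description}"
      end
    end

    # The application code stays the same regardless of the runner.
    runner = ARGV.include?('--native') ? MseguiRunner.new : BrowserRunner.new
    runner.render('window { title: "Demo"; button { id: ok; text: "OK" } }')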

Software package management will be based on my own creation, Silktorrent. The Silktorrent packets might also be made available on the IPFS file sharing network.

SQLite is for local file system based data storage and for data exchange between different programming languages. It is not the fastest option, but it is nicely portable and robust and eliminates the various byte endianness and bit endianness problems. For more speed critical database applications the Firebird database engine must be studied. PostgreSQL is also OK. Massively multiplayer online game communication and chatroom communication will probably be streamed through RethinkDB, but RethinkDB is not for storing data; it is only a real-time data stream switch, where different clients register by using the observer design pattern and the query statement is defined at registration. As of 2016_07 I have not yet made up my mind about what to think of graph databases, but if the application algorithm got simpler and somehow became substantially faster with a graph database than with other types of database engines, then I would try to study Orly and Titan.
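
To make the data exchange idea concrete, here is a small Ruby sketch that assumes the sqlite3 gem; the table layout and the file name are hypothetical. Any other language in the stack (PHP, C#, Pascal, C) can open the very same file with its own SQLite bindings.

    require 'sqlite3'  # assumes the sqlite3 gem is installed

    # One program writes its results into a plain SQLite file...
    db = SQLite3::Database.new('exchange.sqlite3')
    db.execute <<~SQL
      CREATE TABLE IF NOT EXISTS measurements (
        id     INTEGER PRIMARY KEY,
        sensor TEXT NOT NULL,
        value  REAL NOT NULL
      )
    SQL
    db.execute('INSERT INTO measurements (sensor, value) VALUES (?, ?)',
               ['thermometer_1', 21.4])

    # ...and any other program, in any language with SQLite bindings, reads
    # the same file without worrying about byte or bit endianness.
    db.execute('SELECT sensor, value FROM measurements') do |sensor, value|
      puts "#{sensor}: #{value}"
    end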

Network traffic anonymization will be based on the Tor.

Notifications will NEVER be based on e-mail. Telegram.org will be used instead. Due to censorship issues, public forums and mailing list archives will never exist. At best there might be a Tor-hosted copy of a Discourse forum. To hide my and my clients' identities, Tor-hosted wikis and forums will never contain any of my own, custom, code. Bug trackers and project specific wikis will be based on Fossil. Public press releases are to be written to a blog, for example a Habari instance, so that people can subscribe to the news feeds by using the Akregator or QuiteRSS feed readers.
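
For the Telegram based notifications, a minimal Ruby sketch along the following lines should be enough; the bot token and chat ID are placeholders that would come from a real bot registration.

    require 'net/http'
    require 'uri'
    require 'json'

    # Placeholders: a real bot token and chat ID would be supplied by Telegram.
    BOT_TOKEN = ENV.fetch('TELEGRAM_BOT_TOKEN')
    CHAT_ID   = ENV.fetch('TELEGRAM_CHAT_ID')

    # Sends a plain text notification through the Telegram Bot API.
    def notify(text)
      uri = URI("https://api.telegram.org/bot#{BOT_TOKEN}/sendMessage")
      response = Net::HTTP.post_form(uri, 'chat_id' => CHAT_ID, 'text' => text)
      JSON.parse(response.body)
    end

    notify('Nightly build finished.')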

Symbolic calculations are done by using REDUCE. The tools for numerical calculations vary, but the GNU Octave is the first candidate.

Each project is accompanied by a VirtualBox based virtual appliance.

I'll just stop the list here or continue it at some later time, because it turns out that even this very incomplete list is already really long, and it would take me a lot of time to list everything that I have found, created myself or noted down as something that I have to learn. I admit that I did not expect this blog post to be this long. When I started to write it, I was just thinking that I'd note down a few comments, explain why I struggle to keep myself off the very attractive C# bandwagon (and yes, I do like C#, at least the Mono and MonoDevelop part) and what the rationale is behind sticking to old, unpopular and un-sexy technologies, but, as it often happens with me, when I think that I have only a few comments to write, the explanations just grow and grow.

Thank You for reading this blog post. I'll change, update, add to it, probably continue it, at some other time.



+++++++++++++++++++++++++++

Update on 2016_07_27

I feel very insecure if I suspect that I do not have a broad overview of the situation. Whenever I have even the slightest doubt that my efforts are not on the path that I like on the grand scale, I try to check and verify my position. The following schematic describes programming language technologies from an applications programming point of view. From an operating system development point of view the schematic would look very different.


(Schematic: programming language technologies from an applications programming point of view.)




As of 2016_07_27 I do not know if it would work, but one thing that is attractive to me currently is to use SQLite3 or some other widely supported database as a "program image" and for data exchange between different programming languages, Haskell libraries for data processing, and C#/Java for data conversion. The "program image" role of the database makes sure that when the computer loses power during the execution of the program and the program starts up later, the program can continue from some previous state. Security wise, unverified third party libraries that are used for data import and export can run in some operating system jail or be run by an operating system daemon that originates from a pool of daemons. Each daemon executes the not-so-trusted code as some daemon-allocated operating system user that has its home folder emptied after every daemon session. If the operating system has been implemented decently, it's OK for those users to read "public" files on that operating system, including read-only binaries and scripts, id est there is no need to copy the program code at every call. The difficulty of handling time related state within Haskell programs is not a problem, because all the "side effects" are handled by software that is written in some other programming language. The benefit of a functional programming language in this architecture is that by its nature it makes the program parallelizable without the complex custom approaches that are used in the ParaSail implementation. As of 2016_07_27 I haven't yet decided whether the functional language should be Haskell, some form of Scheme or otherwise some sub-set of some Lisp. The simpler the syntax of the language, the simpler it probably is to implement a static code analyzer for it without relying on the Abstract Syntax Tree output of the language implementation. Wyvern is not the first choice, because at first glance it seems to be a more complex language than Scheme/Lisp/Haskell, but it is not totally ruled out either.
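
A minimal Ruby sketch of the "program image" idea, again assuming the sqlite3 gem and with a hypothetical process_item standing in for the real work: progress is committed to SQLite after every step, so after a power loss the program resumes from the last recorded state instead of starting over.

    require 'sqlite3'  # assumes the sqlite3 gem is installed

    def process_item(i)
      puts "processing item #{i}"  # stand-in for the real unit of work
    end

    db = SQLite3::Database.new('program_image.sqlite3')
    db.execute('CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value TEXT)')

    # Resume from the last checkpoint, if there is one.
    last_done = db.get_first_value("SELECT value FROM state WHERE key = 'last_done'")
    start     = last_done ? last_done.to_i + 1 : 0

    (start..9).each do |i|
      process_item(i)
      db.execute("INSERT OR REPLACE INTO state (key, value) VALUES ('last_done', ?)",
                 [i.to_s])
    end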

As of 2016_07_27 it seems to me that what I'm really missing from the general overview is a solution for containers that are like VirtualBox virtual appliances, but lightweight and with restricted access to the host operating system's file system and other resources. That way I could say that at least I'm not substantially increasing the attack surface of the system that runs my software. I'll probably have to study User-Mode-Linux.



+++++++++++++++++++++++++++

Update on 2016_08_02

It seems that I also have to get somewhat acquainted with the Chapel programming language and a related project, the Babel library. (Update about Babel: the project seems to have died in 2012 and its build also failed.)



+++++++++++++++++++++++++++

Update on 2016_08_04

I just want to add that one of the things that helps to navigate the landscape of programming languages is a set of observations about the history of programming languages. Once upon a time there were wires, then there were switches, then someone came up with the idea of feeding in the switch positions from paper tapes, and at some point in time came the perforated cards and magnetic tapes. After some time someone came up with the idea that the computer might translate text based assembler commands to the binary format itself. After that came Fortran, COBOL, ALGOL, C, etc., the myriads of "systems programming languages" of different decades, till someone came up with Lisp, and from there on it's the era of Python, Java, C#, Ruby. Programming languages like the modern Pascal, C++, Rust, D, Go, ParaSail seem to be just modern, or at least relatively modern (Pascal and C++ are quite old), tools for working on a sub-set of modern problems.

The general pattern is that common solutions for solving some frequently occurring problems are written down as software development design patterns, new programming languages are created to make it less laborious to use those design patterns, and there's an effort to upgrade old programming languages to alleviate their lack of built-in support for the new software development design patterns. As every abstraction layer has its own, abstraction layer specific, set of flaws, every new programming language is accompanied by tools for detecting and avoiding those flaws. That flaw detection tooling has many forms and many names. Oftentimes it's built into the interpreter/compiler, but sometimes it has the form of "formal methods", "model checking" or "test vector generation". With every abstraction layer implementation there's also the issue of performance, so the topics of algorithmic complexity, parallelization and compiler based automatic optimization come into play. The Java and the Microsoft Visual Basic cases demonstrate how badly a flawed social process can damage a programming language ecosystem, but from a technical maturity point of view there are three things to evaluate in every programming language and its set of implementations:


  • Does the set of design patterns that a programming language has built-in support for cover the problem domain of a software project?
  • Do the programming language implementations have the tooling for detecting the programming language specific (abstraction layer specific) flaws, and what are the social process and the price of that tooling, including its dependencies?
  • What performance optimizations does the programming language implementation have, including the various overheads of the possible runtimes like the C# CLR, the JavaVM, the Ruby/Python interpreter, etc.?



+++++++++++++++++++++++++++

Update on 2016_10_06

I added an additional requirement to the list of requirements for evaluating technology. Technically high quality projects are long-term projects, because it takes a long time to do things properly. Long-term projects must have their immediate dependencies met long-term. No project is well funded long-term, therefore the long-term projects can only be developed by fanatics that are willing to work on the project for free. Funding speeds up development by buying those fanatics more time for working on the project, but the project deliverables have to be kept up to date and usable also during those times when the funding is missing. During economically tough times the fanatics do not have a lot of time to spend on the replacement of outdated or broken immediate dependencies of their project, and that imposes an additional requirement that the set of immediate dependencies must be transferable to a working order by a few people in a short amount of time. A high learning curve is acceptable, because what matters is the time that it takes for those people, the fanatics, to apply the updates, and those people have already crossed the learning curve or at least are willing to cross it.

From a software architecture point of view, a favorable situation for minimizing the replacement time of dependencies is an architecture where the immediate dependencies are either very old, stable, long-term projects that do not need to be replaced, or the project has its own layer that separates the immediate dependencies from the rest of the project components. Another property to look for, when wanting to minimize the replacement time of immediate dependencies, is that the project consists of "relatively individually developable" and "small" modules and that the number of those modules is minimized. The modules do not need to be independent of each other, but they do need to be "relatively independently" developable, and the project tests must include tests that are in the role of integration tests of those modules.
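
A tiny Ruby sketch of such a separation layer, with hypothetical names: the rest of the project talks only to the wrapper module, so the underlying library can be replaced by editing one file.

    require 'net/http'
    require 'uri'

    # Hypothetical separation layer: the rest of the project depends only on
    # HttpFetcher and never on the underlying HTTP library directly, so the
    # dependency can be swapped out in this one place.
    module HttpFetcher
      # Returns the body of a GET request as a String.
      def self.get(url)
        Net::HTTP.get(URI(url))
      end
    end

    # Everywhere else in the project:
    puts HttpFetcher.get('https://example.com/')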



+++++++++++++++++++++++++++

Update on 2018_03_03

(Updated schematic.)





