My Technology Stack in 2016_07 and How I Prefer to Choose it




Social Process Assessment Based Selection Criteria



Obviously my Technology Stack will change to some extent during my lifetime. At least I intend to keep on learning and to stay critical of my previous choices. However, as of 2016_07 my opinion is that the projects that I consider to be among the technically most successful projects in the world, for example, the Vim editor, the REDUCE Computer Algebra System, the Linux and BSD kernels, Scilab, gnuplot and many other mathematics related projects, GCC, etc., are all projects that have been using the very same technology literally for decades. I find the architecture of the REDUCE Computer Algebra System to be particularly ingenious, because it ships with its own Lisp implementation that is written in portable C, and all of the hard math, symbolic calculation included, is written in Lisp. Truly smart. As of 2016 REDUCE has survived for over 40 years, is still competitive, is still being maintained by its original author and has an extremely advanced, skillful and professional user base. Those projects demonstrate that it is possible to choose technology that stays available for a long time and that technical excellence combined with technically disciplined, technically smart and socially smart developers outlives all fashion trends. My general observation is that business tends to change a lot, but what has (probably) not changed for thousands of years are the laws of nature. The capability of humanity to understand nature changes, from apes to 2016 humans to future aliens, but the laws of nature presumably stay the same, at least within a single universe. All of the long-lived software projects are used for processing data from the natural sciences or are used by people who write technical software, not typical business software.

I am not a social person. I LIKE THINGS THAT WORK and prefer technical excellence to empty talk and social turmoil. I as a person am a really bad fit with superficial people, and the typical business people are superficial. I fit best with engineers, scientists and other people who take their time for thinking and for studying things, who are willing to learn new things and who can withstand unexpected results. I do not fit into the typical mega-corporate world AT ALL; I'm really at odds with almost everything that takes place there. The start-up world and freelancing are my oyster, or at least that's the place where I fit in, provided that the clients are calm, patient, thorough and long-thinking enough.

Given my personal requirements for the technology that I want to develop and use, namely the long term perspective, the technical excellence (by excellence I mean no sloppiness in API design, attention to speed and memory consumption, the fact that 99.99% of compromises are calculated rather than accidental, lack of crashes or at least quick error discovery, calculated portability, etc.) and the requirement that at least someone somewhere uses the technology for processing data from nature, I must avoid all technologies where the social processes compromise any of those properties. I will not go into all of the details in this blog post of how the social processes can undermine a project for me, but usually the mechanism is that someone breaks a dependency by cutting corners, reduces the portability of a dependency, bloats a dependency in its RAM requirements or speed, or introduces security flaws into the dependency. Lawyers, Governments and various kinds of censorship form another threat.

Partial list of technologies that have totally failed by my standards: Delphi, Microsoft Visual Basic, Java and other JavaVM specific programming languages, for example Scala, Microsoft Silverlight, Adobe Flash. I find the fall of Java a particularly sad tragedy, because a lot of excellent application level work is destroyed with the fall of Java. It is possible to say that the open source community did save MySQL from the Oracle corporate meat grinder, but it did not care to save the JavaVM, with maybe the exception of the effort from JetBrains (archival copy).

Specialists of different domains, for example biologists, mathematicians from different branches of mathematics, physicists and computer scientists, tend to prefer different programming languages. An application that uses the best domain specific libraries that are available on planet Earth has to use multiple programming languages simultaneously. The technically best components usually took a long time to develop, often more than a decade, which means that the technically best components have already been created by using technology that has withstood the test of time and has probabilistically survived all possible threats, provided that the components are not some corporate property like Java is in 2016. The C# people have learned the lessons (archival copy) from the Java case and as of 2016_07 I consider C# to be protected from patent trolls and from single vendor financing based risks almost to the extent that I consider C++ to be safe, but the problem that I see with C# (archival copy) is that all of the financiers of C# seem to be business software developers, and business software does not have the requirement to be robust, fast and RAM-conserving. As of 2016 formal verification does not seem to be of much interest to business software developers. As of 2016_07 the MonoDevelop IDE really appeals to me despite its flawed Vim key bindings support, but, again, the business software orientation of C# makes me cautious. The psychopaths that run global corporations do not mind Government induced restrictions, including 1984. Whenever some Government requires that every "legal" CPU must contain some Clipper Chip, the global corporations just accept that and run their business software on the latest and greatest hardware that contains the Clipper Chip, but people like me, who do not accept that kind of an arrangement, might have to produce software that runs on far less capable hardware, something that is not (yet) known to contain any Clipper Chips. In that scenario the majority of C# users are able to "afford" more computationally powerful computers, even when the disparity is not based on money or the ability to buy the more powerful computers. Fully legal businesses always comply with laws, even if the law requires them to rat out their clients. Businesses that are optimized to maximize monetary profit have an incentive to cut costs by not investing in security measures that can withstand Governments.



My Technology Stack



I know that in 2016 C# is faster than Ruby, but the Ruby audience is wider and consists of more non-business-software developers. That makes it more likely that even if Ruby is slow, it will be developed very carefully and with technical requirements placed at a higher priority than business requirements. My personal experience as of 2016_07 is that unless the algorithm resembles some signal processing algorithm, classical computational complexity optimization combined with data traffic optimization, the dispersal of computation in time and well thought out caching of the computation results gives pretty comfortably usable results even with PHP. Ruby tends to be faster than PHP.
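As an illustration of what I mean by well thought out caching and by dispersing the computation in time, here is a minimal Ruby sketch; the class name and the cache folder name are made up just for this example:

    # cached_computation.rb
    # Expensive results are computed once, stored on disk and reused on
    # later runs, so the heavy work gets dispersed over time.
    require 'digest'
    require 'json'

    class CachedComputation
      def initialize(cache_dir = './computation_cache')
        @cache_dir = cache_dir
        Dir.mkdir(@cache_dir) unless Dir.exist?(@cache_dir)
      end

      # The block is executed only if no cached result exists for the input.
      def fetch(input)
        key = Digest::SHA256.hexdigest(JSON.generate(input))
        path = File.join(@cache_dir, key + '.json')
        return JSON.parse(File.read(path)) if File.exist?(path)
        result = yield(input)
        File.write(path, JSON.generate(result))
        result
      end
    end

    cache = CachedComputation.new
    answer = cache.fetch([3, 5, 7]) { |numbers| numbers.map { |n| n**10 } }
    puts answer.inspect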

I'll use Ruby for build automation, application glue, computer science experiments.

I'll use PHP as a glue between web server implementations and web software core.

3D and end user GUIs will be based on web browsers, which suffer from bloat and security related issues even more than C# does, but at least they are pretty and comfortable to use.

I'll use C# libraries for accessing business related document files.

Some speed optimized algorithms, probably mostly those that never use dynamic allocation and are written to fulfill avionics software robustness requirements, will be implemented in formally verified C/C++, especially those algorithms that might be usable with microcontrollers.

Speed optimized algorithms, where memory is dynamically allocated, will probably be implemented in ParaSail.

I'll need to learn to use/compile/administer the Genode operating system.

A truly lightweight 2D GUI will probably be implemented by using the Free Pascal based MSEide+MSEgui. The architecture of the Pascal GUI library will probably be based on the Raudrohi State Cluster Specification Type 1 and the library will probably contain a domain specific language interpreter that allows the GUI to be manipulated through a text based interface, like web browsers are manipulated by using HTML, except that the communication between the "browser" and the "server" will be full-duplex. The idea is that just like web browsers can be switched without needing any changes to web application code, the Pascal GUI library based GUI-runner can be swapped with a web server and web browser based GUI-runner.
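To clarify the GUI-runner swapping idea, here is a rough Ruby sketch of the kind of seam that I have in mind. The class names and the tiny markup string below are invented by me just for this illustration; they are not part of the Raudrohi State Cluster Specification Type 1 nor of MSEgui:

    # gui_runner_seam.rb
    # The application describes the GUI as text and never references a
    # concrete runner class, so the runner can be swapped without changes
    # to the application code.
    class BrowserGuiRunner
      def show(markup_text)
        puts "(browser runner) #{markup_text}"  # e.g. push HTML over a WebSocket
      end
    end

    class PascalGuiRunner
      def show(markup_text)
        puts "(MSEgui runner) #{markup_text}"   # e.g. pipe a DSL string to the Pascal GUI
      end
    end

    def render_main_window(runner)
      runner.show('window{ title{Demo} button{ id{ok} label{OK} } }')
    end

    runner = ENV['GUI_RUNNER'] == 'pascal' ? PascalGuiRunner.new : BrowserGuiRunner.new
    render_main_window(runner)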

Software package management will be based on my own creation, Silktorrent. Maybe the Silktorrent packets might be made available on the IPFS file sharing network.

SQLite is for local file system based data storage and for data exchange between different programming languages. It is not the fastest option, but it is nicely portable and robust and it eliminates the various byte endianness and bit endianness problems. For more speed critical database applications the Firebird database engine must be studied. PostgreSQL is also OK. Massively multiplayer online game communication and chatroom communication will probably be streamed through RethinkDB, but RethinkDB is not for storing data; it is only a real-time data stream switch, where different clients register by using the observer design pattern and the query statement is defined at registration. As of 2016_07 I have not yet made up my mind what to think about graph databases, but if the application algorithm got simpler and were somehow substantially faster with a graph database than with other types of database engines, then I would try to study Orly and Titan.
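Here is a minimal Ruby sketch of the data exchange role of SQLite, assuming that the sqlite3 gem is installed; the file name and the table are made up for this example:

    # sqlite_exchange.rb
    # One language writes rows into an SQLite file, another language reads them.
    # SQLite takes care of the on-disk byte layout, so endianness is not my problem.
    require 'sqlite3'

    db = SQLite3::Database.new('exchange.sqlite3')
    db.execute('CREATE TABLE IF NOT EXISTS measurements ' \
               '(id INTEGER PRIMARY KEY, sensor TEXT, value REAL)')
    db.execute('INSERT INTO measurements (sensor, value) VALUES (?, ?)',
               ['thermometer_1', 23.7])
    db.execute('SELECT sensor, value FROM measurements') do |sensor, value|
      puts "#{sensor}: #{value}"
    end
    db.close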

Network traffic anonymization will be based on the Tor.

Notifications will NEVER be based on e-mail. Telegram.org will be used instead. Due to censorship issues, public forums and mailing list archives will never exist. At best there might be a Tor-hosted copy of a Discourse forum. To hide my and my clients' identity, Tor-hosted wikis and forums will never contain any of my own, custom, code. Bug tracking and project specific wikis will be based on Fossil. Public press releases are to be written to a blog, for example a Habari instance, so that people can subscribe to the news feeds by using the Akregator or the QuiteRSS feed reader.

Symbolic calculations are done by using REDUCE. The tools for numerical calculations vary, but the GNU Octave is the first candidate.

Each project is accompanied by a VirtualBox based virtual appliance.
(Update on 2018_06_25: Instead of VirtualBox, QEMU should be used, because QEMU can execute virtual machines on more CPU types than VirtualBox can.)

I'll just stop the list here or continue it at some later time, because it turns out that this very incomplete list is already really long and it would take me a lot of time to list everything that I have found, created myself or noted down as something that I have to learn. I admit that I did not expect this blog post to be this long. When I started to write it, I was just thinking that I'd note down a few comments, explain why I struggle to keep myself off the very attractive C# bandwagon (and yes, I do like C#, at least the Mono and MonoDevelop part) and what the rationale is behind sticking to the old, unpopular and un-sexy technologies, but, as it often happens with me, when I think that I have only a few comments to write, the explanations just grow and grow.

Thank You for reading this blog post. I'll change, update, add to it, probably continue it, at some other time.



+++++++++++++++++++++++++++

Update on 2016_07_27

I feel very insecure if I suspect that I do not have a broad overview of the situation. Whenever I have even the slightest doubt that my efforts are not on the path that I like on the grand scale, I try to check and verify my position. The following schematic describes programming language technologies from an applications programming point of view. From an operating system development point of view the schematic would look very different.


[Schematic: programming language technologies from an applications programming point of view.]




As of 2016_07_27 I do not know if it would work, but one thing that is attractive to me currently is to use SQLite3 or some other widely supported database for a "program image" and for data exchange between different programming languages, Haskell libraries for data processing and C#/Java for data conversion. The "program image" role of the database makes sure that when the computer loses power during the execution of the program and the program starts up later, the program can continue from some previous state. Security wise, unverified third party libraries that are used for data import and export can run in some operating system jail or be run by an operating system daemon that originates from a pool of daemons. Each daemon executes the not-so-trusted code as some daemon-allocated operating system user that has its home folder emptied after every daemon session. If the operating system has been implemented decently, it's OK for those users to read "public" files on that operating system, including read-only binaries and scripts, id est there is no need to copy the program code at every call. The difficulty of handling time related state within Haskell programs is not a problem, because all the "side effects" are handled by software that is written in some other programming language. The benefit of a functional programming language in this architecture is that by its nature it makes the program parallelizable without the complex custom approaches that are used in the ParaSail implementation. As of 2016_07_27 I haven't yet decided whether the functional language should be Haskell, some form of Scheme or otherwise some subset of some Lisp. The simpler the syntax of the language, the simpler it probably is to implement a static code analyzer for it without relying on the Abstract Syntax Tree output of the language implementation. Wyvern is not the first choice, because at first glance it seems to be a more complex language than Scheme/Lisp/Haskell, but it is not totally ruled out either.
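Here is a minimal Ruby sketch of the "program image" idea, assuming the sqlite3 gem; the schema and the file name are made up for this example, and the point is only that after a power loss the program continues from the last committed state:

    # resumable_job.rb
    # The loop counter survives a power loss, because every processed item
    # is committed to the SQLite file before the next one is started.
    require 'sqlite3'

    db = SQLite3::Database.new('program_image.sqlite3')
    db.execute('CREATE TABLE IF NOT EXISTS job_state (k TEXT PRIMARY KEY, v INTEGER)')
    saved = db.get_first_value("SELECT v FROM job_state WHERE k = 'next_item'")
    next_item = saved ? saved.to_i : 0

    (next_item...1000).each do |i|
      # ... do one unit of the real work for item i here ...
      db.execute("INSERT OR REPLACE INTO job_state (k, v) VALUES ('next_item', ?)",
                 [i + 1])
    end
    db.close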

As of 2016_07_27 it seems to me that what I'm really missing from the general overview is some solution for containers that are like VirtualBox virtual appliances, but lightweight and with restricted access to the host operating system file system and other resources. That way I could say that at least I'm not substantially increasing the attack surface of the system that runs my software. I'll probably have to study User-Mode Linux.



+++++++++++++++++++++++++++

Update on 2016_08_02

It seems that I also have to get somewhat acquainted with the Chapel programming language and a related project, the Babel library. (Update about Babel: the project seems to have died in 2012 and its build also failed.)



+++++++++++++++++++++++++++

Update on 2016_08_04

I just want to add that one of the things that helps to navigate the landscape of programming languages is a set of observations about the history of programming languages. Once upon a time there were wires, then there were switches, then someone came up with the idea to feed in the switch positions from paper tapes, then at some point in time came the perforated cards and magnetic tapes. After some time someone came up with the idea that the computer might translate text based assembler commands to the binary format itself. After that came Fortran, COBOL, ALGOL, C, etc., the myriads of "systems programming languages", at different decades, till someone came up with Lisp, and from there on it's the era of Python, Java, C#, Ruby. Programming languages like the modern Pascal, C++, Rust, D, Go and ParaSail seem to be just modern, or at least relatively modern (Pascal and C++ are quite old), tools for working on a subset of modern problems.

The general pattern is that common solutions for solving some frequently occurring problems are written down as software development design patterns, new programming languages are created to make it less laborious to use those design patterns, and there's an effort to upgrade old programming languages to alleviate their lack of built-in support for the new software development design patterns. As every abstraction layer has its own, abstraction layer specific, set of flaws, every new programming language is accompanied by tools for detecting and avoiding those flaws. That flaw detection tooling has many forms and many names. Oftentimes it's built into the interpreter/compiler, but sometimes it has the form of "formal methods", "model checking" or "test vector generation". With every abstraction layer implementation there's also the issue of performance, so the topics of algorithmic complexity, parallelization and compiler based automatic optimization come into play. The Java and the Microsoft Visual Basic cases demonstrate how badly a flawed social process can damage a programming language ecosystem, but from a technical maturity point of view there are 3 things to evaluate in every programming language and its set of implementations:


  • Does the set of design patterns that a programming language has built-in support for cover the problem domain of a software project?
  • Do the programming language implementations have the tooling for detecting the programming language specific (abstraction layer specific) flaws, and what are the social process and the price of that tooling, including the dependencies of the tooling?
  • What performance optimizations does the programming language implementation have, including the various overheads of the possible runtimes like the C# CLR, JavaVM, Ruby/Python interpreter, etc?



+++++++++++++++++++++++++++

Update on 2016_10_06

I added an additional requirement to the list of requirements for evaluating technology. Technically high quality projects are long-term projects, because it takes a long time to do things properly. Long-term projects must have their immediate dependencies met long-term. No project is well funded long-term, therefore the long-term projects can only be developed by fanatics that are willing to work on the project for free. Funding speeds up development by buying those fanatics more time for working on the project, but the project deliverables have to be kept up to date and usable also during those times when the funding is missing. During economically tough times the fanatics do not have a lot of time to spend on the replacement of outdated or broken immediate dependencies of their project, and that imposes an additional requirement that the set of immediate dependencies must be transferable to a working order by a few people in a short amount of time. A high learning curve is acceptable, because what matters is the time that it takes for those people, the fanatics, to apply the updates, and those people have already crossed the learning curve or at least they are willing to cross it.

From a software architecture point of view, a situation that is favorable for minimizing the dependency replacement time is an architecture where the immediate dependencies are either very old, stable, long-term projects that do not need to be replaced, or the project has its own layer that separates the immediate dependencies from the rest of the project components. Another property to look for, when wanting to minimize the replacement time of immediate dependencies, is that the project consists of "relatively individually developable" and "small" modules and that the number of those modules is minimized. The modules do not need to be independent of each other, but they do need to be "relatively independently" developable and the project tests must include tests that are in the role of integration tests of those modules.
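As a rough Ruby sketch of that separation layer, the rest of the project below only talks to its own small wrapper module, so an outdated immediate dependency can be replaced by editing one file; the module and function names are invented for this illustration:

    # http_fetcher.rb -- the project's own separation layer around an
    # immediate dependency. Only this file knows which HTTP library is used.
    require 'net/http'
    require 'uri'

    module HttpFetcher
      # If net/http ever has to be replaced, only this method changes.
      def self.get_text(url)
        Net::HTTP.get(URI(url))
      end
    end

    # The rest of the project depends only on HttpFetcher.get_text.
    puts HttpFetcher.get_text('http://example.com/').length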



+++++++++++++++++++++++++++

Update on 2018_03_03

Every program, regardless of programming language, runs on hardware. Therefore, for a program to run, there has to be a translation path from the programming language of the program to some hardware specific machine code. (In the context of the current text an FPGA bitstream can be seen as a form of machine code. Hardware does not have to be the classical RAM-machine.) The more translation steps the program has to go through, the less similar to its original, human written, form it becomes. The following schematic does not depict any specific real life scenario, but the schematic does illustrate some of the patterns that exist in real life.




[Schematic: translation paths from programming languages to hardware specific machine code.]

A thing to notice is that a compiler has to include operating system specific customizations and hardware specific customizations. For 2 operating systems, let's say Linux and Windows, and 3 hardware platforms, for example ARM, x86 and LEON, there are literally 2*3=6 compiler branches of the same compiler, for example LLVM. Each of the branches needs exhaustive testing and requires quite a lot of work to create. The amount of work that it takes to customize a compiler ("port a compiler") to a specific operating system and hardware combination is oftentimes so huge that a compiler that is available for one operating system and hardware combination might not be available for another operating system and hardware combination. That may limit the portability, the availability, of software quite a bit. Sometimes a workaround may be to carry out the translation/compilation on a different operating system and hardware combination. For example, an exotic programming language might be translated to C and then the C code is compiled, sometimes, especially in microcontroller projects, cross-compiled, to run on the target hardware.



+++++++++++++++++++++++++++

Update on 2018_06_25

I wrote a post titled

"Multi-core CPU Production Economics"

to the ParaSail forum.


It's known that the greater the die area, the greater the
probability that at least something is wrong at the die.
That is to say, the bigger is the die of an individual
chip, the lower the yield. The lower the yield, the
more expensive those chips have to be that get the
dies that work "sufficiently well" to be shipped.
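A back-of-the-envelope Ruby sketch of that yield effect, using the common
Poisson style yield model (yield = e^(-defect_density * die_area)); all of
the numbers below are made up, only the trend matters:

    # die_yield_sketch.rb
    # Poisson style yield model: bigger dies -> lower yield -> pricier good dies.
    WAFER_COST     = 5000.0   # made-up currency units per wafer
    WAFER_AREA     = 70000.0  # mm^2, roughly a 300mm wafer
    DEFECT_DENSITY = 0.001    # made-up defects per mm^2

    [50, 100, 200, 400].each do |die_area|
      yield_fraction = Math.exp(-DEFECT_DENSITY * die_area)
      good_dies      = (WAFER_AREA / die_area) * yield_fraction
      puts format('die %3d mm^2: yield %.2f, cost per good die %.2f',
                  die_area, yield_fraction, WAFER_COST / good_dies)
    end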

It's also known that the more complex a single CPU core
is, the more die area the single CPU core consumes.
Maybe I'm mistaken, but if I look at the

https://www.amd.com/en/ryzen-pro
(archival copy: https://archive.is/FO1J7 )

then I suspect that the AMD Ryzen CPUs
include even some neural network implementation
to optimize the single core pipeline. A citation from the
marketing materials:

---citation--start---
Neural Net Prediction

Increased efficiency from a true AI that
evaluates the current application and
predicts the next steps before they are needed.
---citation--end----

That has to consume some die area even just
for storing the neural network neuron states.
Probably (I'm not sure, if that's the right place) from

https://www.bunniestudios.com/blog/?page_id=1022
(archival copy: https://archive.is/Kaqu0 )

I read that the reason, why Flash memory cards
are so cheap is that their manufacturing costs are
reduced by skipping the testing of the Flash dies
and by having the memory cards each include
at least 2 dies: one is the Flash die and another is
the controller die that keeps track of the flawed
Flash cells. As the Flash cells "burn through" during the
life time of the memory card, the controller tries to
reallocate the data to those Flash cells that have not yet
"burned through". The Flash cells that are flawed right
after the Flash die exits the semiconductor foundry are
handled by the controller just as any other "burned through"
cell and therefore there is no point in thorough testing
of the Flash dies. If the testing is skipped, then the cost of
such "testing" is ZERO and the yield of the (sellable) Flash dies is
also much better than it would be, if they were all required to
be perfect.

As little as I understand, the economic incentive to increase the
yield of sellable devices is the reason behind the different frequency
ranges and core counts of CPU chips.

Interestingly, the newest consumer grade
AMD CPUs (read: chips, where single CPU cores are huge)
tend to include only 8 cores maximum. At the same time,
ARM CPU cores, which tend to be physically smaller and simpler,
are also sold in 64-core chips/bundles. That gives me a reason
to suspect that for economic reasons the huge, general, CPU cores
will not be delivered in great quantities by placing them all
on a single die. There might be multi-die chips, which might
be like the way the RAM is stacked on top of the Raspberry Pi SoC, or
there might be fancier ways to place multiple dies on top of each other,
as explained at

https://www.youtube.com/watch?v=Tjkfr3BzbUY

From ParaSail perspective it means that if the
number of huge, general purpose, CPU-cores goes up
at LOW COST CONSUMER ELECTRONICS, not just
at some fancy, expensive, military equipment, where
the high expense of the low yield dies is tolerable, then
the cores will likely to be clustered together at some
tree-like structure. Maybe there will be 4 cores per die;
those 4-core dies might then be clustered together to
form a 4-layer stack of dies that has 4*4=16 cores. The
4-layer stacks might then be assembled to form a CPU-chip,
maybe in some cheaper cases 4 stacks per chip, which
would give 4*16=64 cores per chip. A chip with 3*3 stacks
would contain 3*3*16=144 cores.

If the CPU cores are seen as graph vertices and the
connections between the CPU cores are seen as graph edges,
and if most of the cores that run a ParaSail
program run a work stealing engine, then probably the
work stealing has to be scheduled according to the shape
of the graph, taking into account the possible
congestion of some of the routes.
The graph shape might be a CPU architecture
specific ParaSail compilation parameter. A C++ style
instrumentation

https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html

might be used to fine-tune the work-stealing scheduling for
a particular application.
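As a rough Ruby sketch of what scheduling the work stealing according to the
shape of the graph might mean, the idle core below steals from the
topologically nearest busy core; the interconnect graph and the queue
lengths are made up for this illustration:

    # topology_aware_steal.rb
    # Cores are graph vertices, interconnect links are edges; an idle core
    # prefers the nearest victim and breaks ties by the longest task queue.
    CORE_GRAPH    = { 0 => [1, 2], 1 => [0, 3], 2 => [0, 3], 3 => [1, 2] }
    QUEUE_LENGTHS = { 0 => 0, 1 => 4, 2 => 1, 3 => 7 }

    def hop_distance(graph, from, to)
      visited  = { from => 0 }
      frontier = [from]
      until frontier.empty?
        node = frontier.shift
        return visited[node] if node == to
        graph[node].each do |neighbor|
          next if visited.key?(neighbor)
          visited[neighbor] = visited[node] + 1
          frontier << neighbor
        end
      end
      Float::INFINITY
    end

    idle_core = 0
    victim = QUEUE_LENGTHS.keys
                          .reject { |c| c == idle_core || QUEUE_LENGTHS[c].zero? }
                          .min_by { |c| [hop_distance(CORE_GRAPH, idle_core, c),
                                         -QUEUE_LENGTHS[c]] }
    puts "core #{idle_core} steals work from core #{victim}"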

Another reason for considering the way the cores
are clustered is that even if the cores were made of
superconductors, id est even if literally no heat would ever
be generated by the CPU, there is still the current that
is needed to load the parasitic capacitors that the
lines inside of the CPU form. That is to say, even if
the CPU did not produce any heat at all, there is a
minimum current that is needed for driving the CPU.
That minimum current is dependent on the geometrical
properties of the CPU internal structures, but at the same time
the physical size of atoms sets a limit to how big those
internal structures, for example, current conducting lines,
have to be to make sure that the classical electrical
rules would even be applicable. At some point the
nanotechnology rules will be more influential than the
macro process based classical electrical rules. For example,
the tunneling effect might start to determine how many electrons even
move from some point A to some other point B.
I do not think that there are many opportunities for reducing
the physical size of the internal components of CPUs.
I believe that they might be able to replace the materials
and some physical processes that they use, but not the
size of the components. The reason, why I believe that is
that I have summarized some historic data about the
CPUs of different era and my summary, which is a very
rough estimation, is in the form of the following table:

nOfA --- minimum CPU die feature size in number of atoms
nm   --- minimum CPU die feature size in nanometers
f    --- CPU frequency
=============================
| nOfA  | nm   | f       |
-----------------------------
| 60    | 30   | ~3.5GHz |
| 260   | 130  | ~2.3GHz |
| 400   | 200  | ~550MHz |
| 1200  | 600  |  100MHz |
| 12000 | 6000 |    3MHz |
=============================

The thing to notice about this table is that
if one wants to create a relatively reliable
chip that has an electrical line width of
at least about 250 atoms, then the
"economical CPU frequency" is about 2GHz.
That's the reason, why I estimate that
the single cores of the future consumer grade
cheaper hardware will be about 2GHz.
I take it as a cap, when I think about
algorithm design. The 2GHz will be the
approximate bottleneck width of the
non-parallelizable code, at least
at those applications that need
hardware reliability, may be some
radiation tolerance. The fewer atoms there are
per CPU die feature, the bigger the
relative size of the "bullet hole" that is
created in the CPU die by a single radiation particle.
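As a side note, the reason why I care about that roughly 2GHz serial cap
can be illustrated with Amdahl's law style arithmetic; the serial fractions
and core counts below are made up:

    # serial_bottleneck_sketch.rb
    # No matter how many ~2GHz cores there are, the non-parallelizable
    # fraction of the code caps the total speedup.
    def speedup(serial_fraction, core_count)
      1.0 / (serial_fraction + (1.0 - serial_fraction) / core_count)
    end

    [0.05, 0.10, 0.25].each do |serial_fraction|
      [4, 16, 64, 144].each do |cores|
        puts format('serial %.2f, %3d cores -> speedup %4.1fx',
                    serial_fraction, cores, speedup(serial_fraction, cores))
      end
    end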
 
For comparison, the Raspberry Pi 3 has a
CPU frequency that is less than 2GHz and some
Russian foundry (I'll skip the link for now) also
once advertised that they have a "200nm process".
I as an Estonian think that the "200nm process" of that
Russian semiconductor foundry is as
low as they have any motivation to go, because that's
roughly the minimum that they can use for producing reliable
chips for the Russian military industry. Any smaller than
that and the chips might become too sensitive to radiation,
which means, technically they can do pretty much anything that
they can dream up to do and the only thing holding
them back are their social processes. That is to say,
I believe that the Western claim that the
sanctions on Russia somehow limit their military industrial
complex is truly just propaganda and nothing more.
I do not know, maybe the Russian side wants the
westerners to believe that the sanctions have any effect
while in reality the sanctions do not have any effect.
I really do not understand the statements about the sanctions.
In my 2018_06 opinion the most damaging thing for the
Russian electronics and IT industry is the repression of
free speech and repression of businesses. The rest,
even total lack of exports, they could handle really well,
if the businesses were allowed to flourish in Russia and
if the free speech issues were solved. None of that gets
solved, as long as there is a Czar in Russia and as long
as the Russian culture praises hierarchy, there will be a Czar
in Russia, even if that Czar is not the Putin. At least
someone will be at the top of the hierarchy.

Russia and CPUs might be a bit of a stretch, to say the least,
but it is related to the global electronics manufacturing and
economics. Military industry, including that of the adversaries,
does drive the tech industry at least to some extent.


Thank You for reading my post :-)



I suspect that in the case of safety critical systems,
real-time systems, the maximum delay will depend
on how congested the channel between a single CPU-core
and the "south bridge", the input-output hub, is. Maybe
the future multi-core CPUs will have different core types, or
the CPU cores will be somehow prioritized, so that the
most timing critical tasks are allocated only to
cores that have high priority IO-access. The ParaSail
compiler should then take the prioritization of the CPU cores
into account.

It's just a wild thought that occurred to me about 10 minutes ago.
Thank You for reading :-)



