<h1>The Fast Country</h1>
<h2>Repost: ANTLR Trinity (2020-06-15)</h2>
<div>
<p><i>This post is a repost of an article from a previous incarnation of this
blog. I hadn't intended to transfer it over, as the technology is old now (ANTLR
is on version 4), but I recently came across a slide deck online that referenced
the post, so I am reposting it in case anyone was looking for it.</i>
</p>
</div>
<div>
<p>There are 3 components to a really useful software development technology: innovative features, clear and comprehensive documentation, and solid tools. The recent release of ANTLR v3.0 is a perfect example of this. This parser generator tool has all 3 components and each component is done superbly.</p>
<p><a href="http://www.antlr.org">ANTLR</a> is a parser generator tool that is capable of targeting multiple output languages. Out of the box it will generate Java, Python, C, C#, or Ruby code for parsers. Other target languages are possible if the code generators are written. Amongst its cool features are:</p>
<ul>
<li><p>LL(*) parsing: This is an extension to the normal, top down with lookahead, LL(k) parsing technique. It allows for more powerful parsers than those possible with LL(k). Not something I have needed yet, but I can see it being useful in the future.</p></li>
<li><p>Semantic and syntactic predicates: These are tests that can be embedded in grammar rules to turn certain choices in rules on and off based on a boolean test. They allow parsers to be built that simulate recognizers for context-dependent grammars, which makes automated parser generation applicable to a lot more problems. Again, this is something I haven't used yet, but expect to in the future.</p></li>
<li><p>Memoized backtracking: This is a performance improving feature for grammars that use a lot of lookahead. If a parser fails at a choice near the end of the rule it has to go back to the start of the rule and start matching again. The memoization of intermediate matches for a rule speeds this up.</p></li>
<li><p>Unicode support: parsers built with ANTLR recognize Unicode input. ANTLR grammar files themselves do not recognize Unicode characters yet, but Unicode characters can be specified as escape sequences.</p></li>
<li><p>Hierarchical lexers: Most parser generator tools define tokens by means of a regular expression like language. For these tools lexer rules are independent of each other and cannot refer to each other. ANTLR is different: it allows lexer rules to reference other lexer sub-rules. It also allows recursive lexer rules. Now, this is a very useful feature and, for me, tidies up lexer rules a lot.</p></li>
<li><p>Abstract Syntax Tree (AST) generation features: ANTLR has a powerful AST generation feature. When generating parsers I prefer to generate a parse tree or AST and pass that on to a second stage, rather than embedding actions/code in the grammar. This allows the parser to be used as a module in numerous tools. ANTLR's AST-building support is superb and really facilitates that type of development.</p></li>
<li><p>Tree grammars: This is a feature of ANTLR that allows a parser to match tree structures such as ASTs. I used to use JavaCC, and what I normally did was use it to generate a parse tree and then use the visitor pattern to process that tree. If the action at a particular node depended on the structure of the tree, it was up to me to track that in the Java code of the visitor class. ANTLR's tree grammars really simplify situations like that, and I am looking forward to trying them out.</p></li>
</ul>
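To make the hierarchical lexer and AST features above concrete, here is a small, hypothetical ANTLR v3 grammar of my own (the rule names are illustrative, not from the original post). <code>DIGIT</code> is a <code>fragment</code> sub-rule referenced by <code>INT</code>, and the <code>^</code> suffix makes an operator token the root of the generated subtree, with no action code required:

```antlr
// Hypothetical ANTLR v3 grammar fragment, for illustration only.
grammar Expr;

options { output = AST; }

// '^' makes the operator the root of the generated subtree, so
// "1+2*3" yields the tree (+ 1 (* 2 3)) without any action code.
expr    : term ('+'^ term)* ;
term    : atom ('*'^ atom)* ;
atom    : INT ;

// Hierarchical lexer rules: INT is defined in terms of the DIGIT
// sub-rule; 'fragment' rules never become tokens on their own.
INT     : DIGIT+ ;
fragment
DIGIT   : '0'..'9' ;

WS      : (' '|'\t'|'\r'|'\n')+ { $channel = HIDDEN; } ;
```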
<p><a href="http://www.pragmaticprogrammer.com/titles/tpantlr/index.html">The Definitive ANTLR Guide</a>, available from the <a href="http://www.pragmaticprogrammer.com/">Pragmatic Programmer website</a>, is the main documentation for ANTLR v3.0. The book is well named. It clearly and concisely describes the features of ANTLR v3.0 and how to use them. The book is divided into 3 sections: an introduction to ANTLR and language translation, a reference section for the ANTLR syntax, and a section on how to write predicated LL(*) grammars. I have read through the first section and skimmed parts of the reference and this has enabled me to put together a basic parser/recognizer for Scheme. It really is that straight forward.</p>
<p>In addition to The Definitive ANTLR Guide there is a really good wiki with tutorials and FAQs at <a href="https://github.com/antlr/antlr4/blob/master/doc/index.md">https://github.com/antlr/antlr4/blob/master/doc/index.md</a>. Lastly, the distribution comes with great sample grammars. There is a Java grammar and a Python grammar in there, and these are worth referring to in order to see ANTLR put through its paces.</p>
<p><a href="http://tunnelvisionlabs.com/products/demo/antlrworks">ANTLRWorks</a> is the IDE for building ANTLR grammars. It is a standalone editor, written in Java Swing, that provides the standard features that you would expect from an IDE, such as syntax highlighting and error detection. In addition it has a few handy features that you would not expect. Firstly, there is a syntax diagram pane in the GUI. Selecting a grammar rule in editor pane causes a syntax diagram for the rule to be displayed in the syntax diagram pane.</p>
<p>This is very useful for documenting grammars, as the diagrams can be saved as graphics (JPEG, PNG, EPS, etc.). There is also a debugger and an interpreter. The interpreter will interpret the grammar and draw a graphical representation of the parse tree for an input string based on the grammar. If the parse fails it still draws the diagram for what was recognized and puts a node in the tree, at the point where the parse failed, to represent the type of error detected. The parse tree graphics can also be saved and are a real help with visualizing how the rules fit together and what choices the parser is taking.</p>
</div>
<h2>Useful Links: Going faster with continuous delivery (2020-04-29)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>Just thought I would share a blog post on how Amazon does continuous deployment. The title of the article highlights a key goal: faster deployment of completed features. Deployment latency is a key metric that identifies high-performing teams. In her book, Accelerate: The Science of Lean Software and DevOps, Nicole Forsgren identified this as one of four highly predictive metrics for high-performing software teams.</p>
<p>The section on risk management is especially worthwhile. The risk reduction strategies mentioned in the article can be implemented with AWS CodePipeline and/or Kubernetes Deployments. <a href="https://aws.amazon.com/builders-library/going-faster-with-continuous-delivery/">https://aws.amazon.com/builders-library/going-faster-with-continuous-delivery/</a></p>
</div>
<h2>Useful Links: Deploys - It’s Not Actually About Fridays (2020-04-28)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>This: <a href="https://charity.wtf/2019/10/28/deploys-its-not-actually-about-fridays/">https://charity.wtf/2019/10/28/deploys-its-not-actually-about-fridays/</a>
Read. Contemplate. Incorporate.</p>
<p>Seriously, there are 4 metrics that reliably indicate a high-functioning software organization (see Accelerate, by Forsgren, et al., for details - <a href="https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations-ebook/dp/B07B9F83WM">https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations-ebook/dp/B07B9F83WM</a>):</p>
<ul>
<li>Lead time for changes</li>
<li>Deployment frequency</li>
<li>Time to restore service</li>
<li>Change failure rate</li>
</ul>
<p>This article addresses the 'change failure rate' metric by improving the first two with observability tooling.</p></div>
<h2>Useful Links: Microservices Prerequisites (2020-04-27)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>This is a great article describing what capabilities a team has to have in order to run a system which has a microservices architecture: <a href="https://martinfowler.com/bliki/MicroservicePrerequisites.html">https://martinfowler.com/bliki/MicroservicePrerequisites.html</a></p></div>
<h2>Useful Links: Logs and Metrics (2020-04-25)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>This article explains why storing log messages alone is insufficient for robust operation of a software service. Metrics also need to be gathered and stored. <a href="https://medium.com/@copyconst…/logs-and-metrics-6d34d3026e38">https://medium.com/@copyconst…/logs-and-metrics-6d34d3026e38</a></p>
<p>tl;dr - Log volume can spike dramatically when user activity increases, especially when things go wrong. This makes it possible for an alerting system based on logs to be swamped. For a metrics system, volume increases with the number of metrics collected. This is stable and much less likely to fail or slow down during a crisis.</p></div>
<h2>Useful Links: The Practice of Practice (2020-04-25)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>This is a very interesting talk on practicing for Operational events. The speaker draws parallels with musicians practicing for a performance: <a href="https://www.youtube.com/watch?v=87EhBrC2L1U">https://www.youtube.com/watch?v=87EhBrC2L1U</a></p></div>
<h2>Useful Links: Logging Rules of Thumb (2020-04-23)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>Some very useful advice in here for developers. <a href="https://engineering.hellofresh.com/logging-rules-of-thumb-f6c0f71a2351">https://engineering.hellofresh.com/logging-rules-of-thumb-f6c0f71a2351</a></p></div>
<h2>Useful Links: Anatomy of Cascading Failure (2020-04-23)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>An interesting article on Cascading Failures aimed more at the Dev side of DevOps. The list of design anti-patterns is very useful: <a href="https://www.infoq.com/articles/anatomy-cascading-failure/">https://www.infoq.com/articles/anatomy-cascading-failure/</a></p></div>
<h2>Useful Link: PagerDuty Incident Response Documentation (2020-04-23)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>This documentation from PagerDuty on incident response is pretty good. It would need to be tailored for local conditions but it does highlight aspects of incident response that people might not be aware of (different roles during a major incident, for example). <a href="https://response.pagerduty.com/">https://response.pagerduty.com/</a></p></div>
<h2>Useful Links: AWS Cost Optimization 101 (2020-04-22)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgc6KxZlCOZanotnXCqioIBlBORm7My801HOgCJ0xZ8cQ2ZZ1iDKZtBTeEmULh5xjPuPSNojp0tUDyOKW80aouVhTiCDin1a0KkU0qoYf82de43VwV1O0_oVQSCEt5sVczXB2mgDwm1L8Q/s1600/aws_cost_mm.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgc6KxZlCOZanotnXCqioIBlBORm7My801HOgCJ0xZ8cQ2ZZ1iDKZtBTeEmULh5xjPuPSNojp0tUDyOKW80aouVhTiCDin1a0KkU0qoYf82de43VwV1O0_oVQSCEt5sVczXB2mgDwm1L8Q/s1600/aws_cost_mm.png" data-original-width="540" data-original-height="281" /></a>
<p>An interesting article on AWS cost optimization. I am not in 100% agreement with all of it (re-architecting apps to minimize inter-AZ traffic and not using AWS endpoints, for example), but there are some good tips in there: <a href="https://cloudonaut.io/aws-cost-optimization-101/">https://cloudonaut.io/aws-cost-optimization-101/</a></p>
</div>
<h2>Useful Links: Trade-offs Under Pressure (2020-04-20)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>These two posts dive into John Allspaw's (former Head of Engineering at Etsy) master's thesis on the heuristics engineers use to make decisions under pressure, specifically in the context of dealing with an outage to a software service:
<a href="https://blog.acolyer.org/2020/01/22/trade-offs-under-pressure-part-1/">https://blog.acolyer.org/2020/01/22/trade-offs-under-pressure-part-1/</a> and
<a href="https://blog.acolyer.org/2020/01/24/trade-offs-under-pressure-part-2/">https://blog.acolyer.org/2020/01/24/trade-offs-under-pressure-part-2/</a>
There are two noteworthy aspects to this. Firstly, the subject matter itself is useful: it identifies heuristics that engineers use to make trade-offs during outages.
Secondly, the methodology used demonstrates an excellent way of conducting incident reviews. The visualization and classification of the timeline is very informative.</p></div>
<h2>Paper on Gray Failures (2017-09-12)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>A great paper annotated by The Morning Paper, this time on the subject of gray failures. <a href="https://blog.acolyer.org/2017/06/15/gray-failure-the-achilles-heel-of-cloud-scale-systems/">https://blog.acolyer.org/2017/06/15/gray-failure-the-achilles-heel-of-cloud-scale-systems/</a>
There are a handful of interesting takeaways from this one:</p>
<ul><li>It is important that a system's monitoring aligns with its clients' definition of failure. The cycle of failure is inevitable unless proper root causes are identified.</li></ul>
<p>Some personal observations:</p>
<ul>
<li>A potential observability gap is the difference between proximate and root cause. For example, a service may fail because a proxy server returns an error. The error may be due to timing out a request to an upstream server. The cause of that could be that latency has increased on the upstream server. The root cause of that could be that the working set of the server no longer fully fits in memory, so the server is swapping, which increases latency. The server itself is not failing any requests, but the proxy is throwing the results away because they arrive too late.</li>
<li>The "masked failure" area is a great area to search for indicators of failure. If metrics can be found that correlate strongly with later gray failures, then remediation can take place before customers even notice. The trick is finding the correlating metrics.</li>
</ul>
</div>
<h2>Operational Metrics and Alerts for Distributed Software Systems (2017-07-23)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<p>This post will be about operational metrics and alerts for distributed software systems. What do I mean by that? I mean the metrics and alerts that allow operations personnel to detect failure of a distributed software system and help them to quickly diagnose what is wrong.</p>
<h3>Metrics</h3>
<p>The metrics are measurements of characteristics of the system, collected at regular(ish) intervals and stored somewhere for processing - rendering into graphs, triggering alert notifications, etc. Metrics can be divided into 3 categories: input metrics, output metrics, and process metrics. Input metrics are measures of the inputs to the system, for example, the number of user requests and counts of particular characteristics of the requests - where they are from, how large the request data is, and counts of particular features in the request (for example, which resources/items/products are being asked for). Output metrics are measures of the output of the system. Examples include orders successfully placed and counts of unsuccessful orders; since users care about it, the time to respond to a user request can also be considered an output metric. Good output metrics are a close proxy for dollars earned or saved by the system per minute. Process metrics are measurements of the internal operation of the system. Examples include the standard host metrics, such as load average, free memory, disk space or inodes free, etc. Process metrics can also include application-specific internal measurements, such as the number of times an API call retried before it was successful.</p>
<p>Sometimes the lines between metric categories are blurry. For example, counts of HTTP response codes sent back to the client can belong to any of the categories. Typically, 2xx and 5xx response counts are output metrics. 4xx responses are normally input metrics, though if the request is built from data included in the response of previous requests to the system, then a case can be made for including them in the output metric category. The category that 3xx responses fall into is entirely application specific.</p>
<p>In a large system, composed of multiple modules, components, or services, each subcomponent can have metrics of each type. That is, each subcomponent or service can have its own input metrics, output metrics, and process metrics.
</p><p>Each of these categories of metrics is useful in different ways. Output metrics are best for indicating the existence of a problem and its severity. Input metrics are good for indicating whether a problem exists in the system itself, or whether an upstream system is at fault. Process metrics are best for drilling down into what is wrong once the existence of a problem has been established.</p>
<p>Metrics should be gathered regularly enough to indicate changes quickly, and should be predictable enough to detect problems easily. The ideal metric's graph should look like a boring flat line when things are okay, and very definitely not be a boring flat line at the point where problems have started.</p>
<h3>Alerts</h3>
<p>An alert is a notification that a negative, unexpected situation has occurred. In practical terms, some metric has changed in a direction that indicates that bad things are happening. Traditionally, alerts have been categorized based on the severity of the underlying event.</p>
<ul>
<li><em>SEV 1</em> : The event is severe enough to threaten business continuity if nothing is done, e.g., through a significant loss of revenue or reputation or due to a violation of laws or regulations.</li>
<li><em>SEV 2</em> : The event has a significant business impact, e.g., there is a spike in failing orders, the order rate has dropped by 10%, customer responses are taking 10 times longer than normal or some employees are not able to do their jobs due to a failure in the system.</li>
<li><em>SEV 3</em> : The system metrics indicate that something is seriously wrong, e.g., servers are very heavily loaded or some of the requests coming in are malformed, but the business is not affected and the output of the system looks normal. </li>
<li><em>SEV 4</em> : Some unexpected but not particularly serious change has occurred in the metrics. </li>
</ul>
<p>The typical responses to these events are:</p>
<ul>
<li><em>SEV 1</em> : Page everyone. Think of the scene in the movie Leon where Stansfield asks for everyone. This is likely to require quick, co-ordinated action, PR handling, frantic debugging, and possibly approval for significant expenses. In such situations it is better to have people there and not needed than the opposite.</li>
<li><em>SEV 2</em> : Page someone (or multiple someones) with the ability and authority to fix the issue. Have fixing the issue be their highest priority.</li>
<li><em>SEV 3</em> : Make a note in Slack or create a ticket in the ticketing system. The issue should be worked on in the near future, ideally before the end of the next sprint.</li>
<li><em>SEV 4</em> : Unless the team is very proactive, don't bother creating these alerts. For very proactive teams, a notification on Slack, or a backlog item to investigate the error may be appropriate. Even for proactive teams, digging into the root cause of such items is often not the best use of the team's time. Creating process metrics on the number and frequency of such events is probably more appropriate. Getting lots more of these weird events, a lot more often, could then be categorized as a SEV 3 event.</li>
</ul>
<h3>Putting them together</h3>
<p>You should identify at least one output metric for the overall system that is providing a service to customers. Ideally, that metric is a close proxy for dollars earned or saved per minute. Examples: ads served per minute, page impressions per minute, bytes streamed per minute, successful uploads of customer pictures per hour, etc. It is also good to include latency on requests to the end customer as an output metric.</p>
<p>For aggregate metrics such as the sum or average of some values, e.g., the average latency on customer requests, it is good to generate a few more aggregates. Always include the count of the number of inputs in the aggregate. Consider also including quantiles (p0, p25, p50, p75, p90, p99, and p100 are useful). The mode and median are also sometimes helpful. If the input values are normally distributed then the standard deviation should be included.</p>
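As a sketch of the kind of aggregates meant here, the snippet below computes the count and a few percentile cut points over some made-up latency samples using Python's standard library; <code>method="inclusive"</code> keeps the estimates within the observed range:

```python
import statistics

# Hypothetical latency samples (ms) for one reporting interval.
latencies_ms = [12, 15, 11, 240, 13, 14, 500, 12, 16, 13]

count = len(latencies_ms)
# quantiles(n=100) returns the 99 percentile cut points p1..p99;
# 'inclusive' interpolates within the observed min/max.
pct = statistics.quantiles(latencies_ms, n=100, method="inclusive")
p0, p100 = min(latencies_ms), max(latencies_ms)
p50, p90, p99 = pct[49], pct[89], pct[98]

print(f"count={count} p0={p0} p50={p50} p90={p90} p99={p99} p100={p100}")
```

Note how the two slow outliers barely move the p50 but dominate the p99; that spread is exactly what a lone average would hide.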
<p>Pageable alerts, i.e., for SEV 1 and SEV 2 events, should be on output metrics that meet the following criteria:</p>
<ol>
<li>The metric is clean, i.e., the signal in the metric is not swamped by random noise. If a suitable metric is noisy, it might be less noisy if averaged over a longer time period. Rolling averages can work well.</li>
<li>There should be a significant negative change to the metric. It should either be too large to explain as noise or too long in duration to be caused by noise.</li>
<li>The problem should require human intervention to fix. There is no point in paging someone for a transient blip. It is better to let them sleep.</li>
</ol>
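The rolling-average suggestion in the first criterion can be sketched in a few lines of Python (the window size and sample values are illustrative, not from the original post):

```python
from collections import deque

def make_rolling_average(window_size):
    """Return a callable that smooths successive metric samples."""
    window = deque(maxlen=window_size)
    def update(value):
        window.append(value)            # oldest sample falls off automatically
        return sum(window) / len(window)
    return update

smooth = make_rolling_average(3)
readings = [100, 104, 98, 2500, 101, 99]   # one transient spike
smoothed = [smooth(v) for v in readings]
# The spike is damped: the smoothed series never reaches the raw maximum,
# so a threshold alert on the smoothed value is less likely to fire on a blip.
```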
<p>Other things worth paging people for are process metrics that correlate <em>very strongly</em> with a system failure in the near future. As an example, if your system uses MySQL, then a sustained and increasing history list length is almost certainly going to result in system failure within a few hours of starting. However, the correlation needs to be very strong to avoid alert fatigue. If it is a 50/50 chance, then in most SEV 2 cases it is better to let the on-call engineer sleep until the system actually fails.</p>
<p>Related to this, if host metrics going into alarm (load average, CPU usage, disk space or memory free, etc., on a particular host) are a good predictor of system failure, then this is an indicator of architectural weakness. Rather than setting up a pageable alert, fix the redundancy and failover architecture.</p>
</div>
<h2>With Great Power Comes Great Responsibility (2015-03-28)</h2>
<p>Forth is used as a bootloader for SPARC based machines. One feature of SPARC based machines made by Sun Microsystems was the ability to drop back to the bootloader's Forth interpreter by pressing the <em>Stop-A</em> key combination at the console. This suspended the operating system and gave the user an <em>ok</em> prompt to work at. Typically this was used to kick off a kernel debugger or to kick errant SCSI hardware back into line. In effect the OpenBoot PROM (OBP), as the Forth based bootloader was branded, was a very lightweight hypervisor.</p>
<p>A consequence of this was that, while working at the <em>ok</em> prompt, the user wasn't subject to the privilege system of Solaris. People at the console could use this to gain root privileges. The method worked as follows:</p>
<ol>
<li><p>Find the address in memory of the <em>proc</em> structure of a shell that the user has open, i.e., where the shell's process resides in memory.</p></li>
<li><p>Press <em>Stop-A</em> to drop to OBP.</p></li>
<li><p>Write <code>0</code> to the <em>cr_uid</em> field of the process's <em>cred</em> structure. The location of this in memory is easily found from the process address.</p></li>
<li><p>Type <em>go</em> to return to Solaris where there is a shell where the user now has an effective user id of 0, i.e., root privileges.</p></li>
</ol>
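At the <em>ok</em> prompt, the steps above amount to a couple of memory operations. Everything in the sketch below is a hypothetical placeholder - the proc address and the <em>p_cred</em>/<em>cr_uid</em> offsets depend on the Solaris build and would have to be found with a debugger:

```forth
\ EVERY address and offset here is made up, purely for illustration.
\ 300012ab123 stands for the shell's proc structure address;
\ p_cred-offset and cr_uid-offset stand for the structure offsets.

ok 300012ab123 p_cred-offset + x@    \ fetch pointer to the cred structure
ok cr_uid-offset + 0 swap l!         \ store 0 into the cr_uid field
ok go                                \ resume Solaris; the shell is now root
```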
<p>Full details can be found at Brendan Gregg's <a href="http://www.brendangregg.com/Sun/obp.html">website</a>. The option to <em>ps</em> that gave easy access to processes' addresses has since been removed to make this more difficult, but the addresses would still be easy to find with a debugger, for example.</p>
<p>There are a few things to be learned from this:</p>
<ul>
<li><p>With great power comes great responsibility.</p></li>
<li><p>A hypervisor can completely bypass the security controls of its guest operating systems.</p></li>
<li><p>If an attacker has access to a machine physically or via a hypervisor, it is a matter of "when" and not "if" they gain control.</p></li>
</ul>
<h2>Learning Forth (2015-01-25)</h2>
<p>One of my side projects for this year is to learn the programming language Forth. Some people might consider this an odd language to learn. It is not a popular language. There are no hot startups using it (that I know of). It doesn't even show up in the top 100 languages in the <a href="http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html">TIOBE Index</a>. However, I am convinced learning it is worthwhile. Some of my reasons for this are: </p>
<ul>
<li><p>Forth is probably the most successful and widely deployed language that nobody has heard of. It is the language used to develop <a href="http://www.openfirmware.info/Welcome_to_OpenBIOS">OpenFirmware</a>. This boot loader is installed on the laptops of the <a href="http://one.laptop.org/">One Laptop Per Child Project</a>, on PowerPC based Apple Mac computers, and on SPARC based computers from Sun Microsystems. It has also been used to develop control software for the <a href="https://public.nrao.edu/">National Radio Astronomy Observatory</a>, which is where it was developed.</p>
<p>While not as widely used as C/C++, Forth is used a lot in embedded applications and has been ported to most micro-controllers. For example, the <a href="http://www.forth.com/">Forth, Inc.</a> website has downloadable examples for Arduino and the TI LaunchPad development board. The website also lists a number of <a href="http://www.forth.com/resources/apps/more-applications.html">interesting applications</a> built with Forth.</p></li>
<li><p>Forth is a <a href="http://en.wikipedia.org/wiki/Concatenative_programming_language">concatenative</a> stack based language. This makes it very different from most mainstream languages, which are based on the object oriented (e.g., Java), imperative (e.g., C), or functional (e.g., Haskell), paradigms, or hybrid versions of these (e.g., Scala or Ruby). Learning this new paradigm opens up new approaches to solving programming problems and provides a new perspective on the art of programming. The stack programming paradigm is used in the JVM byte code interpreter and in the PostScript interpreter, so getting to grips with this programming model is helpful for understanding the low level details of these widely used technologies. Due to its underlying philosophies, Forth is the most pared down and open of <a href="http://concatenative.org/wiki/view/Front%20Page">the concatenative languages</a>. </p></li>
<li><p>The <a href="http://www.forth.com/resources/evolution/index.html">history of the language</a> is interesting. For example, one of the first <a href="http://www.forth.org/KittPeakForthPrimer.pd">Forth primers</a> was written by <a href="http://en.wikipedia.org/wiki/W._Richard_Stevens">W. Richard Stevens</a>.</p></li>
<li><p>Forth is an excellent language for interacting directly with hardware and exploring the features of hardware. For too long in my career I have been able to get away without knowing much about the underlying hardware that my code runs on. With the rise of the Internet of Things this is a handicap. Understanding of hardware and how to code on it efficiently will become more important. The hardware to software interface is becoming more fluid and that is where Forth lives, so it is ideal for exploring the trade-offs.</p></li>
<li><p>The primary reason I want to learn Forth is that it challenges conventional programming wisdom. Conventional wisdom says hardware can be abstracted away completely behind multiple layers of abstraction. With Forth it is one layer away. This does mean you can cause damage, like accidentally frying the rx/tx GPIO pins on your Raspberry Pi, to pick a <em>totally random</em> example. However, it also allows for very small and efficient code. Conventional wisdom says you should always use libraries and not reinvent the wheel. The philosophy of Forth says that you are not going to need most of the library and it probably won't meet all the requirements of your application anyway, so writing your own version should be considered. Additionally, how well do you know how a library works, and what its trade-offs are, if you have never tried to implement one? It's these little heresies that point out how much of programming wisdom is taken for granted. </p></li>
</ul>
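To give a flavour of the concatenative, stack-based style mentioned above, here is a tiny sketch in standard Forth (the word names are mine): programs are built by composing words, and all data flows through the stack rather than named variables.

```forth
\ Each definition reads left to right as a pipeline of stack effects.
\ The ( before -- after ) comments document the stack contract.
: square  ( n -- n*n )  dup * ;
: sum-of-squares  ( a b -- a*a+b*b )  square swap square + ;

3 4 sum-of-squares .   \ prints 25
```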
<p>I am using the following resources to learn Forth:</p>
<ul><li><p>Starting Forth, by Leo Brodie. This book is unfortunately out of print, but can be found <a href="http://www.forth.com/starting-forth/index.html">online here</a>. This is a beginners introductory book, but, looking at the table of contents, it seems to sneak in some advanced topics, like metaprogramming, near the end. </p></li>
<li><p><a href="http://thinking-forth.sourceforge.net/">Thinking Forth</a>, again by Leo Brodie. I have already read this and it is the best book I have ever read on how to decompose a programming problem and how to structure the solution code. I'll be reading it again after I write a significant amount of Forth code.</p></li>
<li><P><a href="https://www.gnu.org/software/gforth/">GForth</a>: this is the main Forth implementation I'll be using.</p></li>
<li><p>Pi Jones Forth: this is a very bare bones Forth implementation that runs, bare metal, on a Raspberry Pi.</p></li>
</ul>
<h2>TIL: ARM Has Java Bytecode Execution in Hardware (2014-11-23)</h2>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicl4VTX7qCjf8BIATORcauQBlhQ8jJQEe5ZQrZQ2iwTahX9otm7SgXgZEypazu4SPPVb51fS-2AUt5P5MqhTbk2lGOuiViKhDMcg2m3gUsOOtGMtSzfUKQKWPUG94QIUHmdhMQZO_i56Q/s1600/arm_cpuinfo.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicl4VTX7qCjf8BIATORcauQBlhQ8jJQEe5ZQrZQ2iwTahX9otm7SgXgZEypazu4SPPVb51fS-2AUt5P5MqhTbk2lGOuiViKhDMcg2m3gUsOOtGMtSzfUKQKWPUG94QIUHmdhMQZO_i56Q/s320/arm_cpuinfo.png" /></a></div><p>I recently purchased a Raspberry Pi. While poking around in <code>/proc</code> I discovered that <i>java</i> is one of the features of the ARM processor in the Pi. It turns out that some ARM models have Java bytecode instructions <a href="http://en.wikipedia.org/wiki/Jazelle">implemented in hardware</a>. </p>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5208889869407022153.post-66983277593805969362014-11-19T13:53:00.000+00:002014-11-19T13:53:15.677+00:00On Writing Well <p>I like doing things that have a body of theory behind them. For example, I prefer taijiquan to kickboxing as a martial art because it has the deeper theory. So when I started blogging, I went looking for its theoretical foundations. I found them in the principles of good nonfiction writing. I bought two books: <a href="http://www.amazon.com/exec/obidos/tg/detail/-/020530902X/qid=1116369508/sr=8-1/ref=pd_csp_1/102-3326596-9462549?v=glance&s=books&n=507846">The Elements of Style</a>, by William Strunk Jr. and E. B. White, and <a href="http://www.amazon.com/exec/obidos/ASIN/0060006641/qid=1116369343/sr=2-1/ref=pd_bbs_b_2_1/102-3326596-9462549">On Writing Well</a>, by William Zinsser. </p>
<p>On Writing Well is unusual for a writing guide. Firstly, it is a good read. It is actually hard to put down. The advice on writing is given clearly and simply, and that is part of it. However, Mr. Zinsser illustrates his points with personal anecdotes, and this is what makes the book so interesting. In the chapter entitled <i>A Writer's Decisions</i>, for example, he uses an account of a trip he took to Timbuktu. He walks us through the article he wrote, paragraph by paragraph, explaining what he wrote and what he was thinking at the time. Between the travel piece and its explanation, you get an idea of the author's personality. He's interesting. It makes his book interesting.</p>
<p> He is passionate about the craft of writing, and that comes through in a few humorous digs at bad writing. For example:</p>
<blockquote>He or she may think "sanguine" and "sanguinary" mean the same thing, but the difference is a bloody big one.</blockquote>
<p>Humour livens up the advice too:</p>
<blockquote>Don't get caught holding a bag full of abstract nouns. You'll sink to the bottom of the lake and never be seen again.</blockquote>
<p>Secondly, the book covers more than just grammar and rules for composition. It covers the whole craft of writing nonfiction. There is a section on forms of nonfiction writing, such as travel writing, sports writing, biographies, and business writing. There are a few paragraphs on the relationship between an author and an editor. This is useful information for a professional, and interesting for an amateur blogger like myself. There is a chapter on interviewing people too. It explains how to conduct an interview, how to quote people, and the ethical responsibility that a writer has to be faithful when using a quotation. The author also explains why you would want to quote someone in the first place. He uses quotations effectively himself, and these make his point very clear. The author includes a story about an article he wrote about Mount Rushmore. Instead of describing the place himself, he interviewed the people that worked there. I cannot think of a more evocative way of describing Mount Rushmore than one of the quotations he got:</p>
<blockquote>"In the afternoon when the sunlight throws shadows into that socket," one of the rangers, Fred Banks, said, "you feel that the eyes of those four men are looking right at you, no matter where you move. They're peering right into your <i>mind</i>, wondering what you're thinking, making you feel guilty: 'Are you doing your part?'"</blockquote>
<p>In short, On Writing Well is an informative book. It covers the whole craft of nonfiction writing in about 300 pages and it is written well.</p>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5208889869407022153.post-43925694825618478622014-11-17T21:36:00.000+00:002014-11-17T21:52:11.769+00:00First Post<p>Hello and welcome to the inane ramblings of an Irish software developer.</p>
<p>The title of the blog comes from Lewis Carroll's <i>Through the Looking Glass</i>. In the book, Alice goes running with the Red Queen, but they don't seem to make any progress. Alice remarks on this, saying, "Well in our country, you'd generally get to somewhere else - if you ran very fast for a long time as we've been doing." The Red Queen replies, "A slow sort of country. Now, here, you see, it takes all the running you can do, to stay in the same place." The <a href="http://pespmc1.vub.ac.be/REDQUEEN.html">Red Queen Effect</a> is quite applicable to the software industry, and as I probably will be talking quite a bit about the software industry, I thought it would be a good name for a blog.</p>
<p>I have a few objectives for my new blog. By writing here, I hope to learn how to write well. That is, I hope to learn how to write clearly and concisely, and be interesting at the same time. I also hope that this blog will become a good professional advertisement for me - something that says, "Yup. That guy is a decent programmer."</p>
<p>Specific things I hope to talk about here will include my favourite programming languages, good books that I have read, and interesting things that I have learned. I hope you find something of value here.</p>
Unknownnoreply@blogger.com