Python is better than C! (Or is it the other way round?)

Max Maxfield

embedded.com

If you have a quick Google for something like "Python vs. C," you will find lots of comparisons out there. Sad to relate, however, trying to work out which is the "best" language is well-nigh impossible for many reasons, not least that it's extremely difficult to define what one means by "best" in the first place.

One aspect of all this that doesn’t seem to garner quite so much discussion is Python versus C in the context of embedded systems – especially small microcontroller (MCU)-based applications for "things" that are intended to be connected to the Internet of Things (IoT) – so that's what I'm poised to ponder here, but first...

...it's interesting to note that there are literally hundreds and hundreds of different programming languages out there. If you check this Wikipedia page, for example, you'll find 54 languages that start with the letter 'A' alone (don’t get me started on how many start with 'C' or 'P'), and this list doesn’t even include the more esoteric languages, such as Whitespace, which uses only whitespace characters (space, tab, and return), ignoring all other characters, but we digress...

The reason I'm waffling on about this here is that I just finished reading a brilliant book called Learn to Program with Minecraft by Craig Richardson (see my review). This little scamp (the book, not Craig) focuses on teaching the Python programming language, and it offers the most user-friendly, intuitive, and innovative approach I've seen for any language.

As part of my review I said: "Now, I don’t wish to wander off into the weeds debating the pros and cons of languages like C and Python here – that's a separate column in its own right." Well, I'm not outrageously surprised to discover that I was 100% correct, because this is indeed a separate column in its own right (LOL).

Now, I'm not an expert programmer by any stretch of the imagination, but I do dabble enough to be dangerous, and I think I know enough about both Python and C to be very dangerous indeed. There are myriad comparisons that can be made between these two languages; the problem, oftentimes, is working out what these comparisons actually mean. It's very common to hear that C is statically typed while Python is dynamically typed, for example, but even getting folks to agree on what these terms mean can be somewhat problematical.

Some folks might say: "A language is statically typed if the types of any variables are known at compile time; by comparison, it's dynamically typed if the types of any variables are interpreted at runtime." Others, like the folks at Cunningham & Cunningham (I can never tell those two apart), might respond that static typing actually means that "…a value is manifestly (which is not the same as at compile time) constrained with respect to the type of the value it can denote, and that the language implementation, whether it is a compiler or an interpreter, both enforces and uses these constraints as much as possible." Well, I'm glad we've cleared that up (LOL).

Another comparison we commonly hear is that C is weakly typed while Python is strongly typed. In reality, weak versus strong typing is more of a continuum than a Boolean categorization. If you read enough articles, for example, you will see C being described as both weakly and strongly typed depending on the author's point of view. Furthermore, if you accept the definition of strong typing as being "The type of a value doesn’t suddenly change," then how do you explain the fact that you can do the following in Python:

bob = 6                  # type(bob) now returns int
bob = 6.5                # ...now it returns float
bob = "My name is Bob"   # ...and now str

In reality, what we mean by "strongly typed" is that, for example, a string containing only digits (e.g., "123456") cannot magically transmogrify into a number without our performing an explicit operation to make it do so (unlike in Perl, for example). In the case of the above code snippet, all we're saying is that the variable bob can be used to represent different things at different times. If we used the built-in function type(bob) after bob = 6, then it would return int (integer); after bob = 6.5 it would return float (floating-point number); and after bob = "My name is Bob" it would return str (string).
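
To make this concrete, here's a minimal sketch (standard Python 3, nothing exotic) of what "no magic transmogrification" means in practice – the digits-only string refuses to act like a number until we explicitly convert it:

s = "123456"
try:
    print(s + 1)          # mixing str and int raises a TypeError...
except TypeError as err:
    print("Refused:", err)
print(int(s) + 1)         # ...but an explicit conversion is fine: 123457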

One thing on which we can all agree is that C doesn’t force you to use indentation while – putting it simplistically – Python does. This is another topic people can get really passionate about, but I personally think we can summarize things by saying that (a) if you try, you can write incredibly obfuscated C code (there's even an International Obfuscated C Code Contest – need I say more?) and (b) Python forces you to use the indentation you would have (should have) used anyway, which can’t be a bad thing.
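
If you'd like to see that enforcement in action, here's a tiny sketch (again, standard Python 3) that feeds a deliberately mis-indented snippet to Python's own compiler:

src = "if True:\nprint('oops')"      # the body of the if isn't indented
try:
    compile(src, "<demo>", "exec")
except IndentationError as err:
    print("Rejected:", err)          # Python won't even accept the code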

Another thing we can agree on is that C is compiled while Python is interpreted (let's not wander off into the weeds with just-in-time (JIT) compilation here). On the one hand, this means that a program in Python will typically run slower than an equivalent program in C, but this isn’t the whole story because – in many cases – that program won’t be speed/performance bound. This is especially true in the case of applications running on small MCUs, such as may be found lurking in the "things" portion of the IoT.
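
If you're curious, the standard library's timeit module offers a quick-and-dirty way to put numbers on this sort of thing on a desktop machine; an equivalent loop compiled from C would typically win by a healthy margin, but – as noted above – for many embedded tasks that margin is moot:

import timeit
# time 10,000 runs of a simple summation in pure Python; the absolute
# figure varies wildly from machine to machine -- the point is how
# easy it is to ask the question
print(timeit.timeit("sum(range(1000))", number=10000))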

I feel like I've already set myself up for a sound shellacking with what I've written so far, so let's go for broke with a few images that depict the way I think of things and the way I think other people think of things, if you see what I mean. Let's start with the way I think of things prior to Python circa the late 1980s and early 1990s. At that time, I tended to visualize the programming landscape as looking something like the following:

[Figure: The way I used to think of things circa 1990 (Source: Max Maxfield / Embedded.com)]

Again, I know that there were a lot of other languages around, but – for the purposes of this portion of our discussions – we're focusing on assembly language and C. At that time, a lot of embedded designers captured their code in assembly language. There were several reasons for this, not least that many early MCU architectures were poor targets for C compilers, making it hard to obtain optimum results from compiled code.

Next, let's turn our attention to today and consider the way I think other people tend to think of things today with respect to Python and C. Obviously processors have gotten bigger and faster across the board – plus we have multi-core processors and suchlike – but we're taking a "big picture" view here. As part of this, we might note that – generally speaking – assembly now tends to be used only in the smallest MCUs that contain only minuscule amounts of memory.

[Figure: The way I think other people think of things circa 2016 (Source: Max Maxfield / Embedded.com)]

In turn, this leads us to the fact that C is often described as being a low-level language. The term "low-level" may seem a bit disparaging, but – in computer science – it actually refers to a programming language that provides little or no abstraction from a computer's underlying hardware architecture. By comparison, Python is often said to be a high-level language, which means that it is abstracted from the nitty-gritty details of the underlying system.

Now, it's certainly true that the C model for pointers and memory and suchlike maps closely onto typical processor architectures. It's also true that – although it does support bitwise operations – pure Python doesn’t natively allow you to do things like peek and poke MCU registers and memory locations. Having said this, if you are using Python on an MCU, then there will also be a hardware abstraction layer (HAL) that provides an interface allowing the Python application to communicate directly with the underlying hardware.
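
By way of illustration – and noting that this is a different Python-on-MCU implementation from the one discussed below – MicroPython's machine module exposes exactly this kind of layer; the register address in this sketch is a made-up placeholder, not a real peripheral:

import machine

REG_ADDR = 0x40021000                    # hypothetical register address
value = machine.mem32[REG_ADDR]          # "peek": read a 32-bit register
machine.mem32[REG_ADDR] = value | 0x01   # "poke": write it back with bit 0 set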

One example of Python being used in embedded systems can be found in the RF Modules from the folks at Synapse Wireless that are used to implement low-power wireless mesh networks. This deployment also provides a great basis for comparison with C-based counterparts.

Consider a ZigBee wireless stack implemented in C, where any applications will also typically be implemented in C. The stack itself can easily occupy ~100 KB of Flash memory, and then you have to consider the additional memory required for the applications (more sophisticated applications could easily push you to a more expensive 256 KB MCU). Also, you are typically going to have to compile the C-based stack in conjunction with your C-based application into one honking executable, which you then have to load into your wireless node. Furthermore, you will have to recompile your stack-application combo for each target MCU (but I'm not bitter).

By comparison, Synapse's stack, which – like ZigBee – sits on top of the IEEE 802.15.4 physical and media access control layers, consumes only ~55 KB of Flash memory, and this includes a Python virtual machine (VM). This means that if you opt to use a low-cost 128 KB MCU, you still have 73 KB free for your Python-based applications.

And, speaking of these Python-based applications, they are interpreted into bytecode, which runs on the Python virtual machine. Since each bytecode equates to between 1 and 10 machine opcodes – let's average this out at 5 – this means that your 73 KB of application memory is really sort of equivalent to 73 × 5 = 365 KB. Furthermore, the same bytecode application will run on any target MCU that's using Synapse's stack.
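
For what it's worth, here's that back-of-the-envelope arithmetic as a snippet (remember that the 5x multiplier is a rough average, not a measured constant):

flash_kb = 128                  # low-cost MCU's Flash capacity
stack_kb = 55                   # Synapse stack plus Python VM
free_kb = flash_kb - stack_kb   # 73 KB left for applications
print(free_kb, free_kb * 5)     # 73 365 -- the "equivalent" C footprint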

As part of my ponderings, I also asked my chum David Ewing about his views on the C versus Python debate. David – who is the creator of our ESC Collectible wireless mesh networked "Hello There!" Badges and "Hello There!" Robots – is the CTO over at Synapse Wireless and, unlike yours truly, he is an expert programmer. David responded as follows:

C and Python are both fantastic languages and I love them both dearly. There are of course numerous technical, syntactic, and semantic differences – static vs. dynamic typing, compiled versus interpreted, etc. – but the gist of it all is this:

  • C is a "close to the metal" compiled language. It is the "universal assembler." It is clean and elegant. My favorite quote about C is from the back of my 30-year-old K&R book: "C is not a large language, and it is not well served by a large book."
     
  • Python, with its "dynamic typing" etc., reduces "accidental complexity." Python is interpreted (or JIT compiled), so you can have silly errors that aren’t discovered until runtime. However, compilers don’t generally catch the hard, non-trivial bugs. For those, only testing will suffice; a solution must be rigorously tested, regardless of the implementation language.

David went on to say:

If a problem can be solved in Python, then it can also be solved in C; the reverse is not always true. However, if the problem can be solved in Python:

  • The solution (source code) will be simpler than the corresponding C code.
  • It will be more "readable."
  • Perhaps more importantly, it will be more "writeable" (this is an oft-overlooked quality!).

Due to the qualities noted above, the solution will have fewer bugs and be much faster to develop, and these are the real reasons to opt for Python over C for many tasks.

I'm like David (except I sport much better Hawaiian shirts) in that I appreciate the pros associated with both languages. I like the clever things you can do with pointers in C, for example, and I also appreciate the more intuitive, easy-to-use syntax of Python.

So, which language is best for embedded applications? I'm hard-pushed to say. To a large extent this depends on what you want (need) to get out of your applications. You know what I'm going to say next, don't you? What do you think about all of this? Which language do you favor between C and Python (a) in general and (b) for embedded applications? Also, if we were to widen the scope, is there another language you prefer to use in Embedded Space (where no one can hear you scream)?
