“I don't really trust a sane person.” - Lyle Alzado


Hooovahh's Blog

CAN 12 - XNet Signal Min Max With Reset

09/15/2018 07:08 PM

So the previous CAN Blog Part 11 was a simple and quick one to write.  It had a very specific use and a very simple example, and didn't take weeks of preparation, drafting, and coding.  I'm going to continue that trend of quick and simple examples with this post, which describes a way of getting a signal's running minimum and maximum using XNet.  In the last post we talked about the Signal Single Point session, how it is probably the most common XNet session type, and how it has an issue when communication drops out, causing reads to return the same previous value.  Well I have another commonly requested feature for Single Point, and that is a minimum and maximum for a signal.


So in the following example I'm going to show some code that allows for displaying CAN signals in a table.  This table will have the N selected signals as rows, with three columns.  The first column will show the last read value for each signal, which has a hold feature, and the second and third columns will show the signal's running minimum and maximum.  This pretty simple UI has a reset button, so that the minimums and maximums can all be reset, clearing the history.  And the last value will have a hold of 2 seconds.  This means if the data for a CAN signal is lost for less than 2 seconds the table will look normal.  Losing the signal for more than 2 seconds sets all three columns to NaN (not a number).  If the signal comes back the last read value will show the new data, but the minimum and maximum will still show NaN.

When you would use this

Often times a user of some software wants to see the current value of a signal.  But they can't be expected to sit there and watch a long test to see if the data does anything weird.  In most cases monitoring the minimum and maximum a signal has ever been is good enough.  Adding the drop out functionality can also be useful because it can tell us if communication was lost at some point and then restored.

How it's done

It's pretty simple really.  Using XNet we read a signal as XY.  This returns every value a signal has had over time.  Using this we figure out the running minimum and maximum and store them somewhere.  In my case I threw them into a feedback node within a subVI, stored in a type of lookup table using Variant Attributes.  It could be done several ways.
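The VIs themselves are LabVIEW, so as a rough text-based sketch of the same bookkeeping, here's the idea in Python (the class and names are made up for illustration; the dict stands in for the Variant Attribute lookup table):

```python
import math

class MinMaxTracker:
    """Running minimum/maximum per signal name, with a reset that clears history."""

    def __init__(self):
        # Stands in for the Variant Attribute lookup table inside the subVI
        self.history = {}

    def update(self, name, xy_values):
        """xy_values: every value the signal has had since the last read (Signal XY)."""
        lo, hi = self.history.get(name, (math.inf, -math.inf))
        for v in xy_values:
            lo = min(lo, v)
            hi = max(hi, v)
        self.history[name] = (lo, hi)
        return lo, hi

    def reset(self):
        """The UI's reset button: clear all minimums and maximums."""
        self.history.clear()
```

Because the XY read returns every value since the last read, no samples are missed between updates, which is what makes the running min and max trustworthy.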

Here is the main VI:

And here is the subVI that handles keeping the history:

These two VIs have been saved in LabVIEW 2017 and can be downloaded here.

CAN Part 11 - XNet Signal Drop Out and Triggering

08/24/2018 04:57 PM

The most common XNet session type I've seen used is probably Signal Single Point.  In some testing environments you may want to know the time between every frame, but in most cases knowing what the newest value is, is acceptable.  For instance you might want to know the temperature of an ECU by reading a CAN signal sent by the ECU.  Do you need to get the message telling you the ECU is at 25 degrees C, 100 times a second?  Probably not.  And does it matter if one frame comes in a few milliseconds after it was expected to?  Also probably not.  We just expect this signal to change very slowly, and maybe we react if it goes above or below a threshold.  Single Point is great for these kinds of signals.  Just get me the last value for the signal, and I'll react to it, or log it as needed, knowing I might be missing some values between reads.

Signal Single Point Flaw

So since the most common type of session I've run into is Single Point, I've looked at a few ways to make this session type even better.  One flaw of the single point session read type is that you can't know how long ago the value came in.  When you perform a read on the Frame type it returns a timestamp in the cluster that is the frame.  But with the Signal type all you get is a double that corresponds to the engineering units of that signal.  So we might perform an XNet read and be told the ECU is at 25 degrees C, but what we can't know from that read alone is how long ago the ECU told us that.  In an extreme situation, if after the first successful read the DB9 becomes disconnected from the XNet hardware, then every read performed after would still return 25 degrees C, despite the fact that there has been no CAN traffic for some time.  This is because the Single Point read has no timeout.  It just holds the last read value until a new value replaces it.  If a new value never comes, then the old value is always returned.

This Signal Single Point flaw can make for a buggy looking user interface if all you do is perform a read and display it.  A user might see values when the ECU is off, or disconnected for instance.  Contrast this with CANalyzer, which will show the engineering units of the last read value, but next to the signal is a rotating ball that stops if CAN traffic stops.  This is a visual indicator that the last read value was there at one point, but that new data isn't coming in anymore.  There are a couple of techniques to be able to detect this signal drop out with XNet, but the one that I find the most efficient is to utilize the trigger feature of the XNet device.


When creating most XNet sessions, a Signal List or Frame List must be provided.  This is an array of the XNet Frame or XNet Signal data types.  This data type is basically a string with some extra useful features.  All string manipulation functions operate on these data types as if they were strings, and the XNet Create Session subVIs also accept an array of strings.  I mention this because it allows for programmatically creating the list of signals or frames to read or write.  This also has the option of making strings that normally don't show up in the drop downs when you click the constant or control.  One such signal or frame type is the Trigger type.  This is a Frame or Signal reading function that returns a 0 or a 1, depending on whether a particular Signal or Frame has been updated since the last read.  The format for this string is one of the following:

:trigger:.<Frame Name>.<Signal Name>


:trigger:.<Frame Name>

Here is a quick example of how to create a trigger Signal, for every actual Signal intended to be read:

What we see here is a list of 3 signals to read on the left.  This goes through a For Loop, which generates the 3 corresponding trigger signals and creates a new session for these 6 Signals, with the trigger signals coming after the value signals.  A read will now return the values of the Signals, followed by their Trigger Signal values.
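The For Loop above is really just string manipulation; here is a hedged sketch of the same idea in Python (signal names in `<Frame Name>.<Signal Name>` form are an assumption here, and the function name is invented):

```python
def build_session_list(signals):
    """Append one trigger signal per value signal, with all the trigger
    signals coming after the value signals, as in the example VI."""
    return signals + [":trigger:." + s for s in signals]
```

A session created from this list returns, on each read, the signal values followed by their 0/1 trigger values.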

Holding Values

Looking at the rest of the example VI from above, you'll notice that the rest of this VI now allows the Signal Input Single Point session type to have a hold function, where it will hold the value of a set of given signals for some amount of time; in this case 2 seconds.  At that point the signal's value will be set to NaN (not a number).  It does this by reading the trigger, and if it is 1 then we know that signal has a new value.  If it is 0 then we should keep monitoring this signal, and if after two seconds the trigger still doesn't come then we can react.  Another technique I've tried to get this same result is to use the Signal XY session type, read all the values, and throw away all of them except for the latest one.  If there are no new values to read, then we know that signal hasn't been sent, and we set the value to NaN if that continues for 2 seconds.  This also works, but I found there was a decent amount more CPU and memory usage when trying to read all values and all times for a large list of signals, only to throw away all the old values.  This triggering technique works well and seems to be the most efficient way to get what I want.
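As a sketch of that hold logic in Python (assuming each read hands us a value and its 0/1 trigger; the class and names here are invented):

```python
import time

HOLD_SECONDS = 2.0  # how long to hold the last value before going NaN

class SignalHold:
    """Holds each signal's last value; returns NaN once no new data
    (trigger stuck at 0) has arrived within HOLD_SECONDS."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock          # injectable for testing
        self.last = {}              # name -> (value, time of last trigger)

    def read(self, name, value, trigger):
        now = self.clock()
        if trigger:                 # new data since the last read
            self.last[name] = (value, now)
            return value
        prev = self.last.get(name)
        if prev is None or now - prev[1] > HOLD_SECONDS:
            return float("nan")     # signal dropped out
        return prev[0]              # still within the hold window
```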

Using this technique we can now show the last read value of a signal, but if no new data has come in then the value will be set to NaN to indicate this loss of communication.  Ideally we would do something like CANalyzer and still show the last value, but have an indication that the data might not be accurate since the signal is gone.  But I've found most of the time it doesn't really matter what the last reading was; it is just important to show that it is gone.  Still, someone could write a wrapper around the Signal Single Point read function where it returns an array of values, and an array of booleans indicating if the values have reached their timeout and are no longer valid.

Part 12

Having XNet signals as single point, but tracking signal drop out, is pretty handy.  But one extra feature users might find useful is keeping track of the minimum and maximum values of a signal.  In Part 12 I discuss ways of doing all of this using the XNet XY session type.

CAN Part 10 - Running Code on XNet Hardware (continued CRC and Counter)

08/20/2018 04:40 PM

NI-CAN hardware was useful enough when it first came out.  You could read and write CAN frames, and with some hardware read and write CAN signals, and that was about all that was asked for from developers.  But as time went on NI saw the need to come up with a more flexible set of API drivers, and I'm guessing they thought this would be a good opportunity for a hardware refresh too.  This is where XNet came in, and while I've already covered several topics on XNet, I know of one new topic that hasn't been covered anywhere because it is an undocumented, and incomplete, feature going back to the LabVIEW 8.6 era.  And I'm going to peel back just a bit of the curtain here in the hopes that others will find it useful, and possibly so NI will put efforts into making it an official feature.

...So What Is It?

Okay so let me drop what seemed like a bombshell to me, when I first heard about it.  All XNet hardware (as far as I know) has the ability to run arbitrary code that can be loaded at run-time from within LabVIEW.  What this means is XNet hardware has the ability to function as it always does, reading and writing data and working within the confines of the XNet API.  But in addition to that, it has the ability to load compiled binaries, and run code on specific events.


Well I don't want to go into all of the details, but let's just say there are lots of restrictions.  The code that can be run is not LabVIEW; it is text-based code, compiled for the specific microcontroller on the XNet hardware.  In addition to that, the events that code runs on are pretty limited.  The code that you write does not run periodically and has pretty limited access to global information.

I was hoping an entire protocol like ISO 15765, or XCP, could be loaded on the hardware so that the host software wouldn't need to handle that, but that isn't currently possible.  I also thought it might be useful to have the hardware always reply to a specific frame with a specific frame.  But again the limitations at the moment mean this isn't possible.

...So What Good Is It?

At the moment, the only real thing I see that is useful with this functionality is to perform an action just before a frame is to be sent out.  The most common thing I can think of is to modify the payload of the frame that is about to go out, but I think other things could be done, like preventing a frame from going out.  This event is triggered when the hardware is preparing to send the frame, and not necessarily when the XNet Write function is called.  This means a periodic frame that is sent out every 10ms will call the custom code every 10ms, after the first XNet Write call.

Great How Does It Work?

Okay so remember when I said this was undocumented?  Well this is where I start holding back on what I know.  NI unofficially said I could post this information publicly, if I didn't go into the whole process of how to setup the tool chain, write code, compile code, deploy it, and open up the properties that allow this to work.  The reason for this is because this functionality is relatively old, and not many in NI know it exists, let alone how to support it.  If a flood of people start calling NI for support on tool chain issues, and debugging deploying code to hardware that shouldn't be deployed to, then NI will regret ever allowing me to talk about this publicly.  

Uh...So What Can You Tell Us?

Here is the good news.  Even though I can't show how to setup code, and compile it, I can share already compiled code, and demonstrate how it works for a couple of common uses.

CRCs and Counters Again?

Yes...again.  So back in Part 9 we talked about CRCs and Counter signals inside the payload of a frame.  There we talked about how some devices require a counter to change value with every frame being sent, and we discussed the bucket filling technique, which ensures the values are correct, but relies on the hardware to send them at the specified rate.  Well this is the perfect application for our custom XNet code.  Take a look at this example, which is part of the CAN XNet Tools package, found in the CAN Drivers download:

So here we see that we setup the frame out single point session like normal.  Except we have an initialize and configure function that I wrote which is called before starting the session.  These two VIs are password protected (by NI's request) but I can describe what is in them, since it isn't very helpful.  The configure function takes a premade binary that NI has generated, which tells the XNet how to perform incrementing counters, and CRCs.  This binary is downloaded to the XNet using a hidden property node, and it then sends some config information.  The second subVI just sends down more config information using hidden property nodes.  The secret sauce in all of this is really the binary which can't be modified without the whole tool chain and source.

So after running these two functions, and starting the interface, a frame will be sent out every 10ms with a new value for the counter, and a new value for the CRC.  In the demo the counter is the first byte of the payload, and the CRC is the last byte in the payload.  The write function just updates the values of the other 6 bytes in the payload.  You can download the source and run it yourself.  I didn't make a package for this and am just uploading a zip with the VIs in it that are needed. Download can be found here.  The only dependency is on LabVIEW 2015 or newer, and you need the XNet drivers installed.

Part 11...

Now that we've gotten through most of the large topics I hope to step back and show some more specific examples of programming techniques I've used in the past.  The next one is a short and simple example on detecting a signal drop out, using the XNet Trigger and Single Point signal type.  The idea is that we often want to know what a signal's value is, but also hold that value for some amount of time before reverting to a value that lets us know the signal hasn't been seen.  This example shows the most efficient method I know of accomplishing this.

CAN Part 9 - CRCs and Incrementing Counters (Bucket Filling Technique in XNet)

01/21/2018 10:36 PM

As this blog series gets longer these intros become less meaningful.  The intent of the intro is to summarize all the information covered in the previous blog posts, but by now we've basically crammed a year or so of practical CAN knowledge into 8 blog posts.  As a result, trying to summarize it in two sentences isn't really practical.  Just know lots of CAN stuff has been talked about, and this post is going to build on top of the previous knowledge of reading and writing raw CAN frames, as well as the in-depth XNet discussion in Part 6.  In this blog post we are going to cover the incrementing counter and CRC that sometimes take place in automotive communications, as well as ways of satisfying these requirements using hardware timing on XNet devices.

Incrementing Counters

In automotive based CAN, it is not uncommon to have a rolling counter on some signals in a CAN Frame.  The purpose is to ensure the integrity of data being sent, by incrementing a value with every transmission of the frame.  Then any device on the bus can read all frames being transmitted, and can determine if the device is still talking and still performing the task of incrementing the counter, which may give some insight into the status of the device.  If a listener on the CAN bus still sees frames coming in, but the counter is not incrementing, then we know only some part of the device is still functioning, and we might be able to better understand the device's status.  Here is an example of an incrementing counter:

Time 0.000 ID 0x10 Payload: 0x00 00 00 00 00 00 00 00
Time 0.010 ID 0x10 Payload: 0x00 00 00 00 00 00 00 01
Time 0.020 ID 0x10 Payload: 0x00 00 00 00 00 00 00 02
Time 0.030 ID 0x10 Payload: 0x00 00 00 00 FF 00 00 03
Time 0.040 ID 0x10 Payload: 0x00 00 00 00 FF AB 00 00
Time 0.050 ID 0x10 Payload: 0x00 00 00 00 00 AB 00 01
Time 0.060 ID 0x10 Payload: 0x00 00 00 00 FF AB 00 02

Here we see that the last byte in the payload counts from 00 to 03, and then resets back to 00.  The other bytes of the payload may change or they may not, but that last byte must be incrementing every time it is sent, and in our example that is every 10ms.

Being a listener on a CAN bus, and determining if another device is still talking isn't too difficult to do in any API.  For most APIs you can just read all frames, pull out the counter value and compare it to the previous counter value that was read.  For XNet things can be a bit easier since we can perform a buffered read on Signals or on Frames getting all values that have been sitting in the buffer.
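That comparison amounts to one line of logic; sketched in Python (the 0-to-3 rollover matches the example above, but the modulus varies by network, and the function name is invented):

```python
def counter_ok(prev, current, modulo=4):
    """True if the rolling counter advanced by exactly one,
    including the wrap from its maximum back to zero."""
    return current == (prev + 1) % modulo
```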

But if we are to simulate a device on the bus, and are required to increment a value, then the timing of our software is going to be much more critical.  Most devices looking for an incrementing counter have a relatively short timeout and may go into a faulted state if our software doesn't write the incrementing values in a timely manner.  It is not uncommon to have timing requirements of calculating the new CAN frame value, and sending a new one out, every 10ms.  In Windows the reliability of timing is always a struggle.  In any general purpose operating system many other applications might be trying to get the CPUs to perform their work.  Meeting a 10ms timing requirement is possible with software timing, but will likely have lots of jitter, sometimes taking 20ms or more on overloaded systems.

So what's the solution?  Well some CAN hardware have the ability to automatically retransmit CAN frames on the hardware level, which doesn't require software to continually perform functions to keep the transmission going.  When we explored the XNet API in Part 6 we saw that some of the modes like Single Point allowed us to perform the write once, and then the database would handle how often the data would actually be written.

If we use a write mode like Single Point then we can be sure the data will go out every 10ms since it is hardware timed, but what we won't be able to do is change the value with every frame going out.  For this level of control what we are going to need is the Queued mode.

Concept of Filling a Bucket

If we want to use XNet to allow for control over every frame going out, and have the timing be hardware controlled we are going to need something like this example illustration:

At the start of the application we want to write several frames into a buffer, which will be sent one at a time by the hardware.  The number of frames put into this buffer doesn't matter too much; we just want to ensure that the buffer doesn't reach zero elements before we are able to put more in.

At 10ms the first frame goes out, then at 20ms the next, then at 30ms the next.  This timing of transmission is handled by the hardware, but this only works if there are still more items in the buffer to send.  If there are no more frames in the write queue then the transmission will stop.  This means as the bucket is being emptied we need to make sure we fill it back up with another write function.

Now we see that at 40ms the next frame goes out, but what our software needs to do is ensure the bucket (or queue) never gets emptied completely, and add some more frames to be sent out.  We need to keep track internally of the last value we wrote back at Time = 0, and start writing more, adding one to the last value written.  In our example we last wrote 01, so the next write needs to start at 02, so we write 02, 03, 00, and 01.  This is what I call the bucket filling technique, where the frames are being written one at a time, but we add new frames to be written every so often.  The timing of our write call isn't critical at all, and in our example we first added 6 frames, so we need to write more within 60ms since that is how long it will take for the queue to be emptied.

Now that we have the concept, let's look at how this appears in LabVIEW.

Basic XNet Bucket Example

Here is a basic example which closely follows the images from earlier.  Here we are sending 8 bytes on a periodic CAN frame, with the first byte being a counter which goes from 0 to 3 and back to 0.  Bytes 2 through 8 come from the Payload control.  The VI will add 10 frames to the queue, and then periodically check to see how many are in the queue.  If the number of pending items is less than 10 this VI adds 10 more.  One downside of this is that a value change on the Payload may not be seen on the CAN bus until the remaining pending items are sent.  There are some more advanced techniques, like flushing the buffer on a value change of the other payload bytes, but then there is more work to know what the counter value should be.
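The refill logic of that VI can be sketched in text; this Python stand-in models the interaction with the write queue without making any real XNet calls (the class and names are invented for illustration):

```python
class BucketFiller:
    """Keeps a transmit queue topped up with frames whose first byte
    is a counter cycling 0 through 3; bytes 2-8 are a static payload."""

    def __init__(self, payload_rest, refill_size=10):
        self.payload_rest = payload_rest  # bytes 2 through 8 of the frame
        self.refill_size = refill_size
        self.counter = 0                  # next counter value to send

    def refill(self, pending):
        """Called periodically with the queue's pending count; returns
        the frames to enqueue, or nothing if the bucket is full enough."""
        if pending >= self.refill_size:
            return []
        frames = []
        for _ in range(self.refill_size):
            frames.append(bytes([self.counter]) + self.payload_rest)
            self.counter = (self.counter + 1) % 4
        return frames
```

The key design point is that only the counter state lives in software; the 10ms pacing is entirely the hardware's job.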

Payload CRC

As discussed in Part 2 the CAN frame has a very basic CRC built in which ensures that the message is received properly.  In some automotive applications an additional CRC can be added to ensure the data being received is correct and from a device that is still communicating.  In all instances I've seen this implemented this is done with the last byte of the payload.  The particular algorithm for generating the CRC can vary, but the SAE J1850 standard uses a polynomial of 0x1D, so we will use that for this example.

With this in mind we have another reason we may need to calculate the data for each frame.  If we have an incrementing counter, and we have a CRC, then the CRC is going to change with every frame sent, since the counter will change with every frame sent.  In addition to this the payload may change, and the CRC will need to change when that occurs too.  Luckily there's an example for that too.  This is very similar to the previous example except the last byte is used for the CRC, and the CRC is calculated using a VI found in the XNet Tools package which I originally found on the NI forums here.

This is pretty similar to the last example, except here we take bytes 1 through 7, perform a CRC, and then use that as the last byte in the payload.
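For reference, here is a self-contained CRC-8 using the 0x1D polynomial (the init and final-XOR values of 0xFF below follow the common CRC-8/SAE-J1850 convention; verify against your device's spec, since variants exist):

```python
def crc8_j1850(data: bytes) -> int:
    """Bitwise CRC-8: polynomial 0x1D, init 0xFF, final XOR 0xFF."""
    crc = 0xFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x1D) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc ^ 0xFF

# For a frame: CRC the first 7 payload bytes, store the result as byte 8
payload = bytes([0x01, 0x00, 0x00, 0x00, 0xFF, 0xAB, 0x00])
frame = payload + bytes([crc8_j1850(payload)])
```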

Non-XNet Solution

So let's say you don't have XNet hardware, and can't push frames into a queue that get sent out on hardware timing.  Does this mean you can't perform an incrementing counter and CRC?  No, it just means that the timing needs to be done in software, and as a result a decent amount of jitter will be seen.  Your code will typically sit in a small while loop waiting for some amount of time to go by, and then send the next frame.  On Windows you can expect jitter on the order of 10ms or so.  You may get lucky and have no jitter, but because Windows is a general purpose operating system you can never guarantee timing.  If your requirement is to send out a frame every 100ms or more, software timing will probably work just fine.
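A software-timed loop like that might look like this in Python (`send_frame` is a placeholder for whatever your CAN API's write call is, and the function name is invented):

```python
import time

def software_timed_counter(send_frame, period_s=0.1, count=8):
    """Send frames with a rolling counter in the last byte, paced by
    the OS scheduler.  Absolute deadlines keep the average period from
    drifting, but jitter on any single frame is up to the OS."""
    counter = 0
    next_deadline = time.monotonic()
    for _ in range(count):
        send_frame(bytes([0] * 7 + [counter]))
        counter = (counter + 1) % 4
        next_deadline += period_s
        # Sleep until the next absolute deadline (never a negative time)
        time.sleep(max(0.0, next_deadline - time.monotonic()))
```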

Part 10

Part 10 describes an undocumented, but awesome feature of the XNet hardware.  This feature allows for uploading code to the hardware and having it execute at run-time.  The only good example I have of this is to perform CRC and Counter incrementing functionality.  But what this means is this functionality can be handled completely on the XNet hardware and requires no periodic functions to run in your LabVIEW program.  This frees up CPU resources, and makes coding easier.  This feature could have other benefits in the future if NI ever chooses to fully support it.
