Sunday, December 29, 2013

TinyAPRS Packet Throughput Simulations

I spent some time investigating how the proposed TinyAPRS protocol would work and remembered that Bob (WB4APR) wrote a BASIC program to simulate channel throughput based on the ALOHA concept. For the TinyAPRS protocol with Power Nodes, I would like a Net Cycle time of around 60 seconds. Since the node count is low (e.g. 5 nodes), this should be possible. Bob's simulation assumes each node transmits randomly. From what I have seen, this is NOT the case for regular APRS: the packets occur regularly at some rate (e.g. 5 min, 30 min, etc.) within +/- 1 second each time. The only exceptions are mobiles that use proportional beacon mode.

I used Bob's simulation and selected only 5 nodes (stations) and a period of around 10 seconds. I ran the simulation using DOSBox and QBASIC on my Windows 7 system. This is really cool: he wrote this stuff over 10 years ago and tools like DOSBox can bring it back to life.

With the random simulation all nodes achieved a Net Cycle Time of 45 seconds (I was looking for about 60 seconds, so good!).

I was interested in how this would work out if the nodes did not transmit randomly but repeated at a constant interval, so I had each node transmit at a slightly different rate. I wrote this simulation using BASIC-256 like my other simulation. With approximately the same period I got 100% success in 30 seconds. I plotted the packets below: the RED dots are packets that collided and the WHITE ones did not. Bob's simulation is fancier (he counts each node's good packets, etc.) but mine is a good exercise.

My simulation does not stop when good packets from all nodes are achieved, so you can see how the packets slip into collisions and then out again.
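The fixed-interval idea is easy to sketch in a few lines. This is a simplification written in Python (not my BASIC-256 code, and not Bob's program): each node beacons at a slightly different rate, and two packets collide when their on-air times overlap. The interval values and packet airtime here are illustrative.

```python
PACKET_AIRTIME = 1.0          # assumed on-air time per packet, seconds
SIM_LENGTH = 600.0            # simulate 10 minutes
INTERVALS = [10.0, 10.3, 10.7, 11.1, 11.6]   # one fixed rate per node

def transmit_times(interval, end):
    """Start times for a node that beacons at a fixed interval."""
    t, times = 0.0, []
    while t < end:
        times.append(t)
        t += interval
    return times

def count_collisions(intervals, end, airtime):
    sends = []                 # (start_time, node) for every packet
    for node, interval in enumerate(intervals):
        sends += [(t, node) for t in transmit_times(interval, end)]
    sends.sort()
    collided = set()
    for (t1, n1), (t2, n2) in zip(sends, sends[1:]):
        if t2 - t1 < airtime:  # overlapping airtime: both packets lost
            collided.add((t1, n1))
            collided.add((t2, n2))
    return len(sends), len(collided)

total, lost = count_collisions(INTERVALS, SIM_LENGTH, PACKET_AIRTIME)
print(f"{total} packets sent, {lost} collided")
```

Because all nodes start at t=0 they collide immediately, then drift apart as the different intervals accumulate, which is exactly the slip-in, slip-out behavior in the plot.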

I plan to implement TinyAPRS with a random transmit cycle, but most likely a regular cycle would work just as well with micro-controllers, since even if you started them at exactly the same time they would slip in and out of sync. It will also be interesting to see the effects of digipeating, which will add more traffic to the channel.

Sunday, December 22, 2013

TinyAPRS for Power Node Mesh Network

I have done some experiments with the small 433 MHz RX and TX modules on the Arduino using the VirtualWire library and I am impressed. With just a short piece of wire as an antenna I could get coverage all over the house. These wireless modules may be suitable for communication between the Power Nodes I am working on.

Above is the RX connected to an Arduino UNO with an LCD displaying the packet count.

I tried my Teensy 2.0, which is set up for 3 volts, and the TX module still had plenty of range.

The protocol I am considering would be similar to APRS but simplified, since the VirtualWire library only supports a maximum packet size of 27 bytes. This TinyAPRS would use single-byte node addresses, supporting a maximum of 254 nodes, and would not contain call signs for node addresses like APRS, but would borrow many of the other methods. Below are some of my ideas so far...
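To make the 27-byte constraint concrete, here is one way a frame could be packed. This is only a sketch of the idea (Python standing in for the Arduino code), and the field layout is mine, not a final design:

```python
import struct

MAX_FRAME = 27                 # VirtualWire's maximum message size, bytes

def pack_tinyaprs(src, dst, msg_type, payload):
    """Pack a hypothetical TinyAPRS frame: [src][dst][type][payload...].

    Addresses are single bytes; 1-254 are usable node IDs, leaving
    two values (0 and 255) that could be reserved, e.g. for broadcast.
    """
    if not (1 <= src <= 254 and 1 <= dst <= 254):
        raise ValueError("addresses must be 1-254")
    frame = struct.pack("BBB", src, dst, msg_type) + payload
    if len(frame) > MAX_FRAME:
        raise ValueError("frame exceeds VirtualWire's 27-byte limit")
    return frame

frame = pack_tinyaprs(1, 2, 0x54, b"23.5C")   # e.g. a telemetry report
```

With a 3-byte header that leaves up to 24 bytes of payload per packet, which is plenty for short telemetry or status reports.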

Saturday, November 30, 2013

Crypto Currency Experiments

I have become interested in Bitcoin lately and have been studying it. I searched around to see if there was a way to run it in a test environment and came across testnet-in-a-box. I got this environment up and running using CPUminer as a mining tool. However, after running CPUminer for a while and doing some additional reading, I became concerned about how long it would take to complete my first block. Here is the output of CPUminer running against my Litecoin environment (more on that later).

Notice that my hash performance is between 2 and 3 kilohashes per second. I looked at a couple of the mining calculators HERE and HERE and found I would need to wait more than two weeks! This was even with the difficulty for the bitcoin environment set to 1. As you read about bitcoin mining you find it requires special hardware that can deliver millions or billions of hashes per second. I came across Litecoin, which is based on bitcoin but has several differences, one of which is that mining can still be done with CPUs. Even with the difficulty in the Litecoin environment set to 0.00024414 it took about 5-7 minutes per block. This test environment begins with nothing except the Genesis Block, and the coins that are mined do not mature until you have processed about 120 blocks, which for me took about 12 hours (I was also using my computer for other things, so the hash rate gets impacted). After you have some coins you can do transactions and study how the system reacts, etc.
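Those block times line up with how difficulty is defined: finding a block takes, on average, roughly difficulty times 2^32 hash attempts, so expected time per block is that divided by your hash rate. A quick Python check against my numbers:

```python
def expected_block_time(difficulty, hashrate):
    """Average seconds per block: difficulty * 2^32 hashes / (hashes/sec)."""
    return difficulty * 2**32 / hashrate

# My testnet numbers: difficulty 0.00024414, CPU doing 2-3 kH/s
for rate in (2000, 3000):
    t = expected_block_time(0.00024414, rate)
    print(f"{rate} H/s -> {t / 60:.1f} minutes per block")
```

That works out to roughly 6-9 minutes per block at 2-3 kH/s, in line with the 5-7 minutes I actually observed.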

The following is a simple HOWTO to get an environment like this up and running without all the research I had to do. This was done on Windows, but you could do it under Linux as well (or better).

A.) Get litecoin-0.6.3c-win32 HERE (32 bit version; I did not see a 64 bit version)
B.) Get pooler-cpuminer-2.3.2-win32/64 HERE (either the 32 bit or 64 bit version for Windows)
C.) Get the Litecoin testnet-box ZIP file (click the "Download ZIP" button on the far right side)

These are all ZIP packages, so extract A and B each into a folder of its own and extract the testnet ZIP into the Litecoin daemon folder. That process will place two folders named "1" and "2" in the daemon folder, each containing just a .conf file. Next run the following commands:

1.) Open a command window (cmd) or use Cygwin Terminal (Unix like feel on Windows).
2.) Change directory to the folder where you have Litecoin and get into the daemon folder.
3.) Type litecoind -datadir=1 -daemon and press enter. It will just sit there as it runs; the command prompt will not return.
4.) Open another command window and change directory to the same place as step 2 above.
5.) Type litecoind -datadir=2 -daemon and press enter. Notice the data directory in step 3 is "1" and in this step it is "2".
6.) Open another (the third) command window and change directory to the same place as step 2 above.
7.) Type litecoind -datadir=1 getinfo then litecoind -datadir=2 getinfo

You should see something like this as output for each node:

    "version" : 60300,
    "protocolversion" : 60001,
    "walletversion" : 60000,
    "balance" : 3036.70000000,
    "blocks" : 183,
    "connections" : 1,
    "proxy" : "",
    "difficulty" : 0.00024414,
    "testnet" : true,
    "keypoololdest" : 1385777419,
    "keypoolsize" : 101,
    "paytxfee" : 0.00000000,
    "mininput" : 0.00010000,
    "errors" : ""

Your balance will be zero along with your blocks but it should look similar if all is well. By changing the datadir from 1 to 2 you are checking each node. Now that these are running you are ready to start mining.

8.) Open another command window (the fourth)
9.) Change directory to the CPUminer folder
10.) Type minerd -a scrypt -o -O testnet:testnet

If all goes well you should see output similar to the picture at the beginning of the post. The miner will continue to output its current hash rate, and when it completes a block you will see the (yay!!!) like above. Remember you will need to wait until you have processed 120 blocks before your coins stack up.

After that you will want to experiment with the system. That can be done from the third command window, which can control both of your nodes. Here is my cheat-sheet:

Stop node 1   ---> litecoind -datadir=1 stop (up arrow in window 1 to re-run the node)
Stop node 2   ---> litecoind -datadir=2 stop (up arrow in window 2 to re-run the node)
Create Addr   ---> litecoind -datadir=1 getnewaddress <name> (public address on node 1)
Create Addr   ---> litecoind -datadir=2 getnewaddress <name> (public address on node 2)
Send Coins    ---> litecoind -datadir=1 sendtoaddress <address> <amount> (send from node 1)
Send Coins    ---> litecoind -datadir=2 sendtoaddress <address> <amount> (send from node 2)
Balance       ---> litecoind -datadir=1 getbalance (shows how many coins node 1 has)
Transactions  ---> litecoind -datadir=1 listtransactions (shows node 1 transactions)
Find Addr     ---> litecoind -datadir=1 getaddressesbyaccount <accountname>
Help          ---> litecoind -datadir=1 help

The help output will show you the many other commands (explore!!!)

addmultisigaddress <nrequired> <'["key","key"]'> [account]
backupwallet <destination>
dumpprivkey <litecoinaddress>
encryptwallet <passphrase>
getaccount <litecoinaddress>
getaccountaddress <account>
getaddressesbyaccount <account>
getbalance [account] [minconf=1]
getblock <hash> [decompositions]
getblockhash <index>
getmemorypool [data]
getnetworkhashps [blocks]
getnewaddress [account]
getreceivedbyaccount <account> [minconf=1]
getreceivedbyaddress <litecoinaddress> [minconf=1]
gettransaction <txid> [decompositions]
getwork [data]
getworkex [data, coinbase]
help [command]
importprivkey <litecoinprivkey> [label]
listaccounts [minconf=1]
listreceivedbyaccount [minconf=1] [includeempty=false]
listreceivedbyaddress [minconf=1] [includeempty=false]
listsinceblock [blockhash] [target-confirmations]
listtransactions [account] [count=10] [from=0]
move <fromaccount> <toaccount> <amount> [minconf=1] [comment]
sendfrom <fromaccount> <tolitecoinaddress> <amount> [minconf=1] [comment] [comment-to]
sendmany <fromaccount> {address:amount,...} [minconf=1] [comment]
sendrawtx <hex string>
sendtoaddress <litecoinaddress> <amount> [comment] [comment-to]
setaccount <litecoinaddress> <account>
setgenerate <generate> [genproclimit]
setmininput <amount>
settxfee <amount>
signmessage <litecoinaddress> <message>
validateaddress <litecoinaddress>
verifymessage <litecoinaddress> <signature> <message>

As a final bit of fun, if you want to see how the GUI client works, you can use it in place of node 2. I recommend just stopping node 2 using command window 3, then, with a new command window, change directory to the litecoin folder. This time, instead of going into the daemon folder, stay at the root where you will see the litecoin-qt.exe file, and type:

litecoin-qt -datadir=daemon/2

This will launch the GUI client for node 2. You can now send coins from here to node 1 using the addresses you created with the getnewaddress commands. You will notice that if you stop either node the mining will fail but keep trying; once you have 2 nodes in the system again it will begin mining (2 is the minimum).

Note: it is a good idea to specify an account name every time you create a new address, so that you can start up later and figure out which address is which. Example:

> litecoind -datadir=1 getnewaddress BOB1          # This creates a new address with the name BOB1

> litecoind -datadir=1 getnewaddress BOB2          # This one's name is BOB2

> litecoind -datadir=1 listaccounts                            # shows the coins in each account
    "" : 4336.70000000,
    "BOB1" : 0.00000000,
    "BOB2" : 0.00000000

> litecoind -datadir=1 getaddressesbyaccount BOB1          # This returns the public address for BOB1

> litecoind -datadir=1 getaddressesbyaccount BOB2          # This returns the public address for BOB2
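Every litecoind command in the cheat-sheet is really a JSON-RPC call (it is the same interface minerd uses), so you can also script your experiments. Here is a minimal Python sketch; the testnet:testnet credentials match what was passed to minerd, but the port number is an assumption on my part, so check the .conf files in the "1" and "2" folders:

```python
import base64
import json
import urllib.request

RPC_URL = "http://127.0.0.1:19332"          # port is a guess; see the .conf files
RPC_USER, RPC_PASS = "testnet", "testnet"   # same pair given to minerd with -O

def rpc_payload(method, params=()):
    """Build a JSON-RPC 1.0 request body the way litecoind expects."""
    return json.dumps({"jsonrpc": "1.0", "id": "howto",
                       "method": method, "params": list(params)})

def rpc_call(method, params=()):
    """POST one RPC call to a running litecoind (requires a live node)."""
    req = urllib.request.Request(RPC_URL,
                                 data=rpc_payload(method, params).encode())
    token = base64.b64encode(f"{RPC_USER}:{RPC_PASS}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# e.g. rpc_call("getbalance") or rpc_call("sendtoaddress", [addr, 1.0])
```

This lets you automate things like repeatedly sending coins back and forth to watch confirmations accumulate.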

Have fun!

Sunday, November 24, 2013

Distributed Power Node Simulation Session #1

My simulator for distributed power nodes has now been debugged and tested enough that I sort of trust it to do some experiments. The code is posted on SourceForge HERE. The README file has a basic explanation and with the comments in the code, it should be usable.

We will do three experiments with the simulator in this session. I believe these experiments validate the value of a multi-node power system with load sharing.

Experiment 1
We will put a constant load on each node (different amounts), load sharing will be turned off, and there will be no power generation (solar, etc.).

  • Node 1 Load=.5 amps
  • Node 2 Load=.15 amps
  • Node 3 Load=.10 amps
  • No Sharing
  • No Generation (no solar)
The simulator run-time is set for 7 days. Seven days should be long enough for most field exercises and to see various effects that may change on different days, etc.

This run should not be surprising since each node is on its own. Node 1 runs out of capacity in a little over a day, node 2 lasts about 4 days, and node 3 lasts almost 6 days. Remember, each node has 28 Ah of capacity and the simulator drops the load if the capacity falls to 50% or less (14 Ah). This is the best practice for sealed lead acid batteries to ensure long life.
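These cut-off times check out by hand: with no sharing, each node simply runs until the 14 Ah of usable capacity is consumed by its load. A quick sketch (Python here, though the simulator itself is BASIC-256):

```python
# Each node has 28 Ah and drops its load at 50%, so 14 Ah is usable.
USABLE_AH = 28.0 * 0.5                 # Ah available before load drop
loads = {1: 0.50, 2: 0.15, 3: 0.10}    # amps, per node

# runtime in days = usable capacity / load current / 24 hours
runtimes = {node: USABLE_AH / amps / 24.0 for node, amps in loads.items()}
for node, days in runtimes.items():
    print(f"Node {node}: {days:.1f} days")
```

The results (about 1.2, 3.9, and 5.8 days) match what the simulator shows, which is a nice sanity check on the model.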

Experiment 2
Same constant loads on each node, but now load sharing will be turned ON, and there will be no power generation (solar, etc.).

  • Node 1 Load=.5 amps
  • Node 2 Load=.15 amps
  • Node 3 Load=.10 amps
  • Sharing ON
  • No Generation (no solar)
With sharing enabled we now see the benefits of the system. All the nodes last the same amount of time (about 3 days; Node 1 benefits the most). The way the load sharing works in the simulator is that when a node's capacity drops below 21 Ah, it sets a flag to request external power from another node (EXT on the trace). On a per-node basis, looking at Node 1 for example, it will determine which other nodes it can get power from (in this case 2 or 3), then set a timer (currently 30 minutes; shorter and longer times will be tried in the future) and get power from that node for that period of time. Only one of these transfers can happen at a time. Looking at the graph above on the Node 1 line, you will notice the TIMER trace is active before the other nodes' because Node 1 is the most heavily loaded node and needs help first. The TIMER trace is basically a sawtooth pattern, showing the timer counting down from 30 minutes. By day two the timers are active on all nodes and the load sharing rotates between all the nodes in a round-robin sequence. The L_ON trace shows whether the loads are "on" or "off" for the node. All three nodes drop load at about the same time (late in day 3), though Node 1 sputters (intermittently goes up and down) due to its heavy load. The nodes will continue to request power transfers even after all the nodes have been depleted in this experiment. The simulation sets the power transfer level proportional to the donor's capacity (at 14 Ah the transfer is nearly zero). This is needed because the nodes may get power from another source (e.g. the sun when it rises). We will look at that scenario next.
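The core sharing rules are compact enough to sketch. This Python fragment is a simplification of the simulator's logic, not the BASIC-256 code itself, and the 2 A maximum transfer and donor-selection rule are illustrative (the simulator rotates round robin):

```python
FULL_AH = 28.0
FLOOR_AH = 14.0      # 50%: loads drop and transfers taper to zero here
REQUEST_AH = 21.0    # below this a node raises its EXT flag
TIMER_MIN = 30       # length of one power transfer, minutes

def transfer_amps(donor_ah, max_amps=2.0):
    """Donor output, proportional to capacity above the 14 Ah floor."""
    frac = max(0.0, (donor_ah - FLOOR_AH) / (FULL_AH - FLOOR_AH))
    return max_amps * frac

def pick_donor(capacities, requester):
    """Simplified donor choice: the healthiest other node donates."""
    others = {n: ah for n, ah in capacities.items() if n != requester}
    return max(others, key=others.get)

caps = {1: 16.0, 2: 25.0, 3: 22.0}        # example state, Ah per node
if caps[1] < REQUEST_AH:                  # node 1 raises its EXT flag
    donor = pick_donor(caps, 1)
    amps = transfer_amps(caps[donor])
    print(f"node {donor} -> node 1 at {amps:.2f} A for {TIMER_MIN} min")
```

Tapering the transfer toward zero at the floor is what lets a depleted donor recover once the sun comes up, rather than being dragged below 50%.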

Experiment 3
Here we have the same constant loads on each node, sharing is turned ON, and now Node 2 has generation from solar each day. The simulation code has a solar power curve for each node based on time of day. This was pulled from real data for the Los Angeles, CA area from the web site HERE. There is an hourly performance data output section and I selected a day in November to use in the code (see below).

  • Node 1 Load=.5 amps
  • Node 2 Load=.15 amps
  • Node 3 Load=.10 amps
  • Sharing ON
  • Node 2 generation (Solar)=3 amps

With node generation (solar gains) things get even more interesting. As before, Node 1, being the most heavily loaded of the nodes, starts transferring power first. But now, since Node 2 gets solar power each day, it shares the most power out to the other nodes. The XL_ON trace shows when a node is providing power, and Node 2 is consistently providing power from mid-day on day one of the graph. By day 3 the cycle of power for each node repeats (by the way, the graph is divided vertically into 7 days). Since the cycle repeats we can see that none of the nodes drop load, so this is a sustainable condition. The key to this multi-node concept is that the system is equal to "the sum of its parts": the nodes can be relatively small in capacity individually, but when combined, the system can handle higher loads than any single node alone.

I have learned a lot from this simulation and intend to do more experiments in future sessions. I know the simulation is not accurate to real life in many areas, but it is very useful for exploring this idea. I hope to build components of the real hardware to validate and/or fix the model soon. The simulation has helped me realize that a simple "party line" bus may be the best and simplest way to build this system, which would remove the complexity of building a cross-bar switch and the master/slave A/B port communications. I am thinking of using RS-485 for the node communication and skipping the fancy power line modem (PLM). This will require switch coordination so that the 300 VDC bus and RS-485 bus are not active at the same time, but that seems simple enough with isolation relays.

Saturday, November 16, 2013

5-way switch PCB

One of my projects in progress is an antenna rotor controller using an Arduino, designed by K3NG. I did not want to use several separate switches for the UP, DOWN, LEFT, and RIGHT controls, so I decided to try a 5-way switch. I needed a way to mount this switch in the box that will hold all the electronics, so I created a PCB using ExpressPCB. I then printed the layout on paper, ironed it onto the copper clad, and etched the boards using K7QO's method, called MUPPET.

Sunday, November 3, 2013

Power Node Idea (with Distributed Generation)

Imagine a battery box device like the Juicebox MK1 or MK2 that can be connected to other Juiceboxes to share the load across multiple boxes. This is similar to the concept of a DC micro-grid, but my idea is a little different. Each node has a battery, a charge controller to charge the battery from solar or wind sources, and the ability to send some of its power over a long distance (up to 1 km) to other nodes. The battery would be a set of 12 volt Sealed Lead Acid (SLA) types with a capacity ranging from 28-48 Ah. Four of the common 7 Ah SLAs only cost $10-$15 each and provide a low cost 28 Ah array.

These power nodes would be used in the field, while camping, and in emergency situations, powering DC lighting and communication gear. For emergency preparedness every ham should have a power system they can use when the AC grid is down, and many turn to a gasoline generator today. These gas generators are great; however, some are very loud or heavy, and the good ones can be expensive. Most ham gear today runs nicely on 12 volts DC, but these generators are set up to make stable 120 V AC, not just DC (though some can charge batteries).

In a ham radio field day situation there are typically several complete radio station setups, and usually the stations are not all operated at the same time. So if each station had a power node and they were connected together, the capacity of one node could be shared with another, and each node would not have to be too large. For minimal complexity, each power node would have four 12 V DC power ports (outputs), two 300 V DC ports (input or output), and a solar/wind port (input). Each of the 12 V DC ports can be disconnected under the control of a micro-controller unit (MCU). By providing this control on the 12 V DC ports, a form of load shedding can be done: the loads can be prioritized, and if capacity is getting low, loads can be switched off to prevent premature capacity loss. The two 300 V DC ports can each be an input or an output, giving the node the ability to send 300 V DC on one port or receive 300 V DC on one of the two ports, but it cannot send and receive at the same time, even on different ports (more on that later). Finally, the primary method of keeping the battery charged is via the solar/wind input. This port is basically the input to a Pulse Width Modulated (PWM) or Maximum Power Point Tracking (MPPT) controller. These controllers can take inputs as high as 30 V DC and adjust them for the proper charging of batteries. As long as this input is a few volts higher than the maximum float voltage of the battery array, the controller will handle it. In practice there is a difference between a solar controller and a wind controller. Here is a basic block diagram of the proposed system:

This system has been mostly a thought experiment; however, a computer simulation has been developed that has helped me explore it deeper than I expected. The current simulation is written in BASIC-256 and supports a three node cluster like below:

The simulation abstracts a lot of what will need to be worked out in hardware, like the node to node communications and how the power transfers will work. What I found in the three node system is that if one node is giving power to another node, then no other transfers can occur in the cluster until that one is finished, due to the current design. I have determined that the communication would be easy to implement as a master/slave configuration. In the figure above, all the "A" ports are connected to "B" ports, so an "A" port is a master and a "B" port a slave. In the power node detail drawing there is a Power Line Modem (PLM) shown connected to the cross-bar switch. The PLM can communicate over the same 300 V DC line whether it is energized or not by superimposing a high frequency signal on the line. With a single PLM in the design, it will need to "listen" on port "B" much more than it transmits (masters) on port "A". The cross-bar switch will be responsible for configuring the "A" and "B" ports for either receiving or sending 300 V DC, or listening or transmitting with the PLM. Another consideration is that if a node "dies", the cross-bar switch should fail into a "bridge mode" where ports A and B are bridged so that the cluster is still workable.
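The port rules above are easy to capture in code, which is roughly how the MCU firmware might enforce them. A hedged sketch (the mode names are mine, not a final design):

```python
from enum import Enum

class PortMode(Enum):
    IDLE = "idle"
    SEND_300V = "send"      # sourcing 300 V DC out of this port
    RECV_300V = "recv"      # accepting 300 V DC into this port
    PLM_MASTER = "master"   # PLM transmitting (the "A" port role)
    PLM_LISTEN = "listen"   # PLM listening (the "B" port role)
    BRIDGE = "bridge"       # fail-safe: A and B bridged straight through

def valid_config(port_a, port_b):
    """A node may send OR receive 300 V DC, never both at once,
    even on different ports."""
    modes = {port_a, port_b}
    return not ({PortMode.SEND_300V, PortMode.RECV_300V} <= modes)
```

A check like this in the cross-bar control logic would keep the single power path from ever being configured as both source and sink.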

I am still testing the simulation under various condition to confirm it is valid for a three node system. When I complete the testing I will post the code.
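The prioritized load shedding on the four 12 V DC ports, described above, could look something like this. The port names and Ah thresholds are illustrative, not part of the design:

```python
def shed_loads(capacity_ah, ports):
    """Return the on/off state of each 12 V port for a given capacity.

    ports: list of (name, priority, min_ah); a port is switched off
    once capacity drops below its min_ah threshold, so lower-priority
    loads get higher thresholds and are shed first.
    """
    return {name: capacity_ah >= min_ah for name, _priority, min_ah in ports}

PORTS = [("radio",    1, 14.0),   # highest priority: keep until 50%
         ("lighting", 2, 17.0),
         ("charger",  3, 20.0),
         ("spare",    4, 23.0)]   # lowest priority: shed first

print(shed_loads(21.0, PORTS))    # at 21 Ah only "spare" is shed
```

Staggering the thresholds this way means the least important loads drop early, stretching the remaining capacity for the radio.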

Next steps are to build up a real charge controller and use it with a battery and instrumentation, like voltage and current sensors and a variable load, to explore the effects of loading a battery while charging it, etc.

Saturday, November 2, 2013

High Voltage DC power Transmission Test

I decided to experimentally verify the work done by Bob WB4APR regarding his 330 Volt DC power distribution system HERE. He is using a single wire with an earth return, which I am interested in, but I used direct wires in my tests and then later used resistors to simulate longer wires. Below is my setup. I had a 400 watt 12 V to 120 V inverter and a Dell laptop power supply. I constructed the AC to DC doubler using 470 uF and 560 uF 450 volt electrolytic capacitors, along with the 1 kV diodes like Bob used in his write-up.

I first connected the doubler directly to the Dell laptop power supply and verified that it worked, then I placed two 120 ohm resistors (one on the positive side and the other on the return side) in the line feeding the power supply. The power supply provided a constant 19 volts regardless of the line resistance. Note: you need to use a power supply with the 100-240 V AC rating. It turns out they will run on DC as well, and 330 V DC is about the peak voltage of 240 V AC.

I was then curious about the efficiency of this power conversion system, since it steps 12 volts DC up to 120 volts AC, doubles it to 330 volts DC, sends it over the line to the laptop power supply, and finally steps it down to 19 volts DC.

I loaded the laptop power supply with a 25 ohm resistor, which pulled about .76 amps. This is about 14 watts delivered to the load. I then took measurements at the inverter end: it was consuming about 26 watts, so this is only about 55% efficient. I am losing almost half of my power in the conversion! Since I had the resistors in-line I figured there was loss there, so I removed them and remeasured. That only improved it by about 2-3%, which is good in terms of what just the line losses are, but the conversion efficiency still seems low. I then thought I needed to load the power supply more heavily to get better efficiency, so I lowered the load resistance and was able to pull over 1.5 amps, and the efficiency did go up to about 70%.
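For reference, the arithmetic behind that efficiency figure, from the raw measurements:

```python
# Output side: the laptop supply delivering into the 25 ohm resistor
v_out, i_out = 19.0, 0.76     # volts and amps at the supply output
p_in = 26.0                   # watts measured at the inverter end

p_out = v_out * i_out         # watts actually delivered to the load
efficiency = p_out / p_in
print(f"{p_out:.1f} W out / {p_in:.0f} W in = {efficiency:.0%} efficient")
```

With clip-lead measurements a couple of watts of uncertainty at either end moves the result several percent, which is why I only quote it as "about 55%".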

My conclusion is that this method is usable, and since I was using clip leads even on the low voltage, high current side, plus long leads for measuring current and voltage, my calculations could be off a bit. I was hoping to see 80-90% efficiency; however, this is not bad when you consider that you can transmit this power over a 1 km distance with tiny 20 gauge wire. My intention is to use this method in a distributed generation, multi-node power system for field ham radio use, like on Field Day. I will post more details on this idea soon.

Saturday, October 26, 2013

Monday, September 9, 2013

First blog post

I needed a place to journal my projects and hopefully give back to the Internet a little of what I have taken over the years.