Mike Walker at FlexRadio introduced us to KMTronic. He uses this device in his remote station to simplify turning equipment on and off remotely over the web. The KMTronic has a built-in web page providing a simple user interface. All that is needed is a web browser and the KMTronic's IP address. Priced at less than $100, it is a great solution for remote station users.
At the W0QL remote station a KMTronic has been installed to provide two backdoor access functions. One is LAN isolation between the AT&T mobile hotspot and the main LAN: four relay contacts are used to electrically bridge the two LANs or to isolate them. The other four contacts are used to reset the BMSs on the four battery banks. If a BMS has tripped, the KMTronic can be accessed over the Internet and the corresponding relay activated to reset the BMS remotely. This saves a site visit.
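Relay boards like this are normally driven with plain HTTP requests, which makes scripted resets possible in addition to manual browser control. The sketch below is illustrative only: the URL command format (and whether this particular model accepts URL commands at all) is an assumption, so consult the KMTronic manual for the real syntax.

```python
import time
import urllib.request

KMTRONIC_IP = "192.168.1.204"  # address used at this station

def relay_url(relay: int, on: bool) -> str:
    """Build a control URL. The 'FF' command path here is hypothetical --
    check your KMTronic model's manual for the actual format."""
    return f"http://{KMTRONIC_IP}/FF{relay:02d}{1 if on else 0}"

def pulse_relay(relay: int, seconds: float = 1.0) -> None:
    """Momentarily close a relay, e.g. to reset a tripped BMS."""
    urllib.request.urlopen(relay_url(relay, True))
    time.sleep(seconds)
    urllib.request.urlopen(relay_url(relay, False))
```

A momentary pulse rather than a latched closure matches how the BMS reset is described: short the BMS briefly, then release.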
How To Use
Internally the IP address of the KMTronic is 192.168.1.204. It can be reached remotely by any device on ZeroTier with a network ID ending in ee4.
Abstract: It has always been a goal to have “backdoor” remote access for troubleshooting. There are times when the primary Internet connection is down and normal access is not possible. It is those times when backdoor remote access saves the day and can prevent a trip to the site. These are the specific essential building blocks:
AT&T Mobile Hotspot
Raspberry Pi running ZeroTier, IP forwarding, and iptables
Same subnet but separate ranges of IP addresses
Let’s get started with an explanatory overview. The hotspot provides Internet access over a different path by using the cellular data network. This specific hotspot costs $35 a month for unlimited data; T-Mobile’s $50 service would probably also work. Up and down bandwidth is 30 Mbps, even at the rural location. Luckily there is an AT&T cell tower not too far from the remote site. Not so lucky is the fact that the hotspot provides only a private IP address, not a public one, so it cannot be reached from the outside world. Called “carrier-grade NAT,” or CGNAT, it is a heavy-duty, impenetrable firewall. Not to fear, however.
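One quick way to confirm a connection is behind CGNAT is to compare the WAN address the hotspot reports against the shared address space reserved for carrier-grade NAT (100.64.0.0/10, per RFC 6598). A small check, with example addresses:

```python
import ipaddress

# RFC 6598 reserves 100.64.0.0/10 as shared space for carrier-grade NAT.
CGNAT_BLOCK = ipaddress.ip_network("100.64.0.0/10")

def is_cgnat(addr: str) -> bool:
    """True if the address sits in the CGNAT shared range,
    meaning it cannot be reached from the outside world."""
    return ipaddress.ip_address(addr) in CGNAT_BLOCK

# A hotspot WAN address like 100.71.3.9 indicates CGNAT;
# an address like 203.0.113.7 would be publicly routable.
```

A WAN address in an ordinary RFC 1918 private block (10.x, 172.16–31.x, 192.168.x) behind the carrier would mean the same thing in practice: no inbound connections.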
A great solution to the CGNAT problem is a product called ZeroTier, which becomes the second detail of this project. ZeroTier is an application that runs on a computer behind a firewall and reaches out over the Internet to a software-defined LAN. A software-defined LAN is similar to the user side of a home router; instead of the hardware connections a home router uses, it does it all with algorithms and the Internet. Other computers running the same application with the same credentials can reach the same software-defined LAN and communicate as if they were all in the same office. For backdoor access, one instance of the application runs on a computer at the remote site (a Raspberry Pi) and another instance runs on a computer (a Windows 11 PC) at the home location. Competing products exist and might also work, like Tailscale, reverse TCP tunneling, SoftEther, WireGuard, and possibly others that do NAT traversal. ZeroTier has been the most comfortable and successful of the ones tried at this remote station.
How To Use
Any device anywhere worldwide on the same ZeroTier network can reach the LAN at the remote site. As this is written the network ID ends in ee4. To reach the i3 NUC: power is on the ‘Station’ circuit of the 4005i, using port 82. The NUC’s IP on the LAN is 192.168.1.100 and it can be reached using Remote Desktop Protocol. The Pi is at 192.168.1.204 and can be reached with PuTTY. The KMTronic is at 192.168.1.204 and can be reached with a browser. If the main LAN is down, the only device that can be reached is the Pi. Other well-known ports:
Follow the instructions on the ZeroTier web page to make an account and to create a network. Their free plan has all the features needed.
This brings us to the third detail, the Raspberry Pi computer.
A Raspberry Pi is fully capable of running the ZeroTier application and then some.
Shown above is a Raspberry Pi model 3 which is the model being used in this project. Follow the instructions on the ZeroTier web page to join the network created above. With a hotspot and a Pi running ZeroTier the hardware and some of the software to get into the site is complete but no connection has been made to the main LAN yet.
Each detail has involved challenges, but probably the biggest challenge of all has been how to connect to and communicate with the existing LAN at the remote site. At this point there are two LANs: one providing a data link between the hotspot and the Pi, and the other providing communication for all the existing equipment. Connecting any two LANs requires a router, but not just any router; an ordinary home router will not do. It turns out the solution is simple and elegant thanks to the Linux operating system running on the Raspberry Pi. It can run a few built-in processes that perform the necessary router functions. A nice writeup of how to configure this routing function is published by the ZeroTier developers: “Route between ZeroTier and Physical Networks”
# The interface names below are placeholders -- substitute your own (see `ip link`)
PHY_IFACE=eth0
ZT_IFACE=ztxxxxxxxx
sudo iptables -t nat -A POSTROUTING -o $PHY_IFACE -j MASQUERADE
sudo iptables -A FORWARD -i $PHY_IFACE -o $ZT_IFACE -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i $ZT_IFACE -o $PHY_IFACE -j ACCEPT
Another essential setting is IP forwarding:
sudo sysctl -w net.ipv4.ip_forward=1
Edit /etc/sysctl.conf to uncomment net.ipv4.ip_forward. This enables forwarding at boot.
Next, take steps to avoid two devices having the same address on the combined LAN. On the hotspot, set the DHCP IP address range to the highest 50 addresses in the subnet and make the subnet identical to the main LAN subnet, then turn off DHCP on the hotspot. On the main LAN, set the router’s DHCP range to exclude the top 50 addresses and leave DHCP on.
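The address bookkeeping above can be sketched with Python’s ipaddress module. The /24 subnet and the 50-address cutoff mirror the scheme described; the actual addresses are examples.

```python
import ipaddress

# One /24 subnet shared by both sides: the hotspot keeps the top
# 50 addresses and the main router hands out everything below.
subnet = ipaddress.ip_network("192.168.1.0/24")
hosts = list(subnet.hosts())        # 192.168.1.1 through 192.168.1.254

hotspot_block = hosts[-50:]         # 192.168.1.205 - 192.168.1.254
main_lan_block = hosts[:-50]        # 192.168.1.1 - 192.168.1.204
```

As long as the two ranges never overlap, a device bridged between the LANs keeps a unique address.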
A few items remain to polish the backdoor project. The whole idea is to be able to access the remote network at all times, and there is no way to know what the source of a failure might be. It could be a power outage inside the remote station; in that case the backdoor needs its own power. For that reason, the hotspot and Pi have their own battery and solar panel separate from the rest. Considering the main LAN goes through a big Ethernet switch and that switch could be down, the hotspot and Pi have their own switch, also powered by the separate battery. Rebooting devices remotely is invaluable. Some devices, like computers, can be rebooted with software commands; other devices, like BMSs and EMCs, require a hardware reset. Relays wired to provide the hardware reset, controlled over the Internet through the backdoor, can save a trip to the site. At this site relays are wired to short out the BMSs (which is how they are reset if they have tripped). Another bank of relays is installed to reset the EMCs if they lock up (as they have been prone to do). Almost all equipment has a method of being rebooted or reset remotely.
A successful backdoor access project provides a lot of comfort, knowing the everyday remote operation has tools for a better chance of recovery when something goes wrong.
Thoughts for future improvements: One improvement could be to move all the non-radio equipment to the secondary Internet connection, leaving the entire bandwidth of the main connection to the radio. That would be easy because the hardware connections are already in place; it would just be a matter of changing the IP settings on each piece of equipment to static, with the gateway address of the secondary connection. A second idea is to combine the two Internet connections into what is called “dual-WAN” service. A product called Speedify exists to do this easily (according to the sales literature) and is worth checking out someday.
One additional thought: use bridging instead of routing to see if bridging would pass the broadcast packets. The packets that advertise a service appear to be blocked (by the hotspot?) when using routing. It is possible bridging would fix this, since the hotspot would not see the packet headers and thus would not know a particular packet was a broadcast packet.
One more additional thought: use the iptables “mangle” table to add an MSS filter. Clamping the TCP MSS shrinks packets so that the resulting MTU passes through the PPPoE Internet connection at the radio end.
Here is an example of a line of code to create the mangle table:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1452
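The arithmetic behind the 1452 figure, as a sketch: PPPoE consumes 8 bytes of the standard 1500-byte Ethernet MTU, and the IPv4 and TCP headers take 40 more.

```python
# Why clamp MSS to 1452 on a PPPoE link:
ETHERNET_MTU = 1500
PPPOE_OVERHEAD = 8      # PPPoE header bytes
IP_TCP_HEADERS = 40     # 20 bytes IPv4 + 20 bytes TCP

pppoe_mtu = ETHERNET_MTU - PPPOE_OVERHEAD   # 1492, the usable MTU
mss = pppoe_mtu - IP_TCP_HEADERS            # 1452, max TCP payload per segment
```

Any TCP segment built with this MSS fits inside a 1492-byte packet and traverses the PPPoE link without fragmentation.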
Forty meters: Following a QST article from July 1972, “The W2FMI 20-Meter Vertical Beam,” we scaled the design to 40 meters. This is a three-element yagi made of three quarter-wave verticals. Gain is less than a dipole but better than a single vertical. It will be aimed toward Europe.
The pattern was very narrow right through central Europe, like a knife edge.
Three days after completion a wind storm blew down both the director and the reflector. The driven element survived and was used for a time as a quarter-wave vertical. Life happens. Inspired by the performance of this antenna, a 160 meter version was completed in 2021. A possibility also exists of resurrecting the 40 meter yagi: a 30 foot tower with a tri-band beam now resides on the site, perfect for an omega match tuned to 40 meters. The old director and reflector are still around and could be re-installed. The radials were relocated, so they would have to be re-installed as well. Capacitors for the omega match are in the junk box. If the weather stays nice this fall, this might be a good late-fall project. DXCC is already accomplished on 40, but the DXCC Challenge always needs new QSOs.
Were you imagining a horizontal yagi at 270 feet in the air? No way. It is actually a vertical yagi, meaning three 160m vertical antennas are in a line and spaced a typical distance apart for a yagi. Only the center vertical is driven and the other two verticals are a director and a reflector. The concept for this antenna came from an article in QST, W2FMI 20-Meter Vertical Beam, June, 1972, p 14, by Dr. Jerry Sevick. This is an autumn 2021 project with a goal of working the final 34 countries needed for 160 meter DXCC. Orientation is toward Europe.
The only vertical that didn’t already exist is the new reflector shown in the foreground. It is 50 feet tall. The tower with the beam on top is doing double duty. The beam is being used as a top hat for 160m. The tower becomes the 160m driven element thanks to Omega matching. Faintly visible in front of the tower is the director, which is a 43 ft. vertical with a top hat, resonated to 160m.
What distinguishes the yagi from a phased array is how it is driven. In a phased array all three verticals would be driven at certain phase angles and magnitudes using phasing cables and a phasing network. A phased array has more gain but is very complicated to implement. This yagi has only one driven element, no phasing cables, and is quite forgiving as to spacing. Yagi elements can be spaced for maximum gain or for best front-to-back ratio; these elements are spaced for gain at 0.2 wavelength. The reflector is resonated 5% lower in frequency than the driven element and the director 5% higher. Center frequency is 1.840 MHz with a 2:1 SWR bandwidth from 1.8 to 1.885 MHz. The yagi will be ready for the winter 160m DX season.
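A quick sketch of the numbers involved: speed of light over frequency gives the wavelength, and the 5% offsets come out close to the loading-coil settings used at this station.

```python
C = 299_792_458              # speed of light, m/s

f_center = 1.840e6           # driven element resonance, Hz
wavelength_m = C / f_center  # about 163 m on 160 meters

spacing_m = 0.2 * wavelength_m   # about 32.6 m (~107 ft) between elements
f_reflector = f_center * 0.95    # about 1.748 MHz, tuned lower
f_director = f_center * 1.05     # about 1.932 MHz, tuned higher
```

On 160 meters even a “compact” 0.2-wavelength spacing is over a hundred feet, which is why parasitic elements there tend to be whatever verticals and towers already exist on the property.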
A loading coil is housed in this enclosure at the base of the reflector. The coil is a roller inductor adjusted to resonate at 1744 kHz, about 5% lower than the driven element’s resonance of 1840 kHz.
At the base of the director (the other parasitic element) is an identical loading coil, resonated about 5% higher than the driven element: the director resonates at 1930 kHz, shown below. SWR is irrelevant because the director is not connected to a feedline; it is connected directly to the radial ground screen.
In the PSKReporter screenshot below, notice that the strongest reports (look at the “dB” numbers) are in a line to the northeast of the Colorado QTH, which is very good news running barefoot at 100 watts.
Fall 2022 update: Directivity is not being seen like it was on the 40 meter vertical yagi. After more reading, the lack of directivity may be caused by the use of short verticals; the 40 meter yagi used full-size quarter-wave verticals, which is impossible on 160 with the given resources. The idea of a yagi on 160 meters may get put on the shelf in favor of using the tower as a single radiator.
Using a NanoVNA to measure the resonant frequency of a trap from an HF antenna.
This post is using a trap from a Hustler Model 6BTV. The entire manual is at this link:
Note this caution from page 45: “You must adjust each trap with the antenna completely assembled – traps cannot be adjusted before assembly.” Therefore the readings taken in the top picture are useless. The picture should be titled, “How NOT to measure a trap”.
The unmounted components from the top picture have been mounted in a small enclosure:
The resistors are all 120 ohms, 1/4 watt or smaller, wired as shown here. The goal of this arrangement is to present 50 ohms to the NanoVNA while loading the trap as little as possible.
In practice this method is good for determining whether a trap is defective (are turns shorted, for example). For tuning a trap vertical, adjusting the trap by sliding the sleeve up or down and measuring the SWR of the antenna in place is much more effective.
Using PSKReporter, stations were being spotted on 6 meters in Colorado that were not being copied by this station. That was the incentive to switch from stacked halo antennas to a yagi, to hear better. Being heard by others is not the problem: PSKReporter shows spots everywhere in the country when the band is open, plus there is an amplifier that can be turned on any time. The problem is hearing those last 5 states needed for Worked All States. It was a quick swap of antennas. The coax and rotator were already in place, so it was just a matter of taking down the halos and putting up the yagi, done in a day. Cushcraft makes a very inexpensive 5 element antenna that is a good choice for a trial. Don’t you think it makes a pretty stack?
Performance results to follow.
Update – May 31, 2021: Worked 2 of the 5 needed states so far. It works! Still need DE, AK, and HI.
Update – July 15, 2021: Got all 3 remaining states and now have WAS on 6 meters! Yay!
For a long time there have been multiple signals on this remote base that appear to be digital hash and not legitimate radio signals. On the waterfall they look like noise from switching power supplies. Considerable work has gone into chasing these signals down. Over the last year each switching power supply has been replaced with a linear supply or mounted in a metal box with ferrite chokes on the leads. Since the noise continued, looking elsewhere was necessary. The next suspects were the solar controllers, considering they switch power on and off rapidly just like a switching power supply and considering they are about the only devices that haven’t been investigated. Searching the web turned up numerous reports that solar controllers are major contributors of RFI. The controllers used at the remote site* were specifically selected for their FCC Class B certifications; they aren’t supposed to be generating RFI, which is why they weren’t investigated earlier. Today’s testing was very revealing: the controllers are generating tremendous RFI. Later it was discovered the interference occurs only in the mode where the batteries are fully charged and the controllers are in a state of “high voltage disconnect” to avoid overcharging the LiFePO4 batteries. When the system is in a charge state there is no interference. Below is a picture of a waterfall on 17 meters on a sunny day when the solar system is generating full capacity in “high voltage disconnect” status.
Obviously those big wide bands of yellow-green are not supposed to be there. They are digital hash caused by something, and their huge signal strength indicates the source is probably local. The next picture is with one of the four controllers turned off. Observe that the band on the right and the band in the center have disappeared as the waterfall continues to scroll down. Two bands on the left are still present.
Next, another of the controllers is turned off, revealing an amazingly RFI-free band. What a stunning difference. Apparently the other two controllers are not generating hash, for some reason yet to be determined.
Toroid chokes on the controller wires should be an easy fix. A handful of Mix 31 ferrite toroid chokes was placed on the wires that come in and out of the controllers, and no noticeable change occurred. Paraphrasing the captain of the boat in the movie Jaws, “We’re going to need a bigger choke.” Upon more web scouring back home, an article was found that discussed a rarely mentioned bit of information about ferrite chokes.
“Ferrite material choking performance degrades in the presence of strong DC current. For this reason, it is better to pass both DC wires from the solar panels through the same snap on ferrite as this will eliminate the DC bias in the core.”
The chokes had been placed on individual wires in the initial test. About 15 amps of DC was present on those wires. Is this DC current enough to degrade the performance of the chokes? On the next trip to the site, both wires will be placed through the cores and the results will be reported here.
*The controllers used at the remote site are Morningstar PWM ProStar PS-30 and Morningstar MPPT ProStar PS-MPPT-25M.
Chokes On Both Wires Together
On the next site visit, the first thing noticed was that different controllers were causing interference than the ones that caused it last time. Here is the first picture upon walking in the door, before any testing.
Two lines of digital hash coming down the waterfall are from two of the four controllers, but not the same ones as last time. The next picture is after turning off three controllers and, at the 7 second mark, placing a choke on both wires of the fourth controller.
The choke clears up a good amount of noise but not nearly all of it. More chokes were added and there was almost no more improvement. Chokes don’t seem to be the answer.
The next topic is why only two controllers at a time cause interference. What is the difference? PWM and MPPT controllers are both contributing equally. It was soon noticed that the interference comes from the controllers whose batteries are fully charged. When a battery is not fully charged and the controller is working hard, there is no interference. When a battery reaches its charged state and the controller stops charging, it starts generating the digital hash. Solutions come to mind both elegant and crude. An elegant solution would be to monitor the Modbus data output, watch for the fully charged messages, and use a microcontroller like an Arduino to turn off the controller. That sounds like a lot of coding, debugging, and time spent. The crude solution would be a relay on the solar input cables driven by a voltage sensor on the battery: when the battery reaches full voltage, the relay opens and effectively turns off the controller. Call this the Rube Goldberg, band-aid, patchwork-quilt solution, but voltage sensors and relays are now on order from China. The interference will have to be lived with for a month until the parts arrive.
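The crude approach boils down to a voltage comparison with some hysteresis so the relay doesn’t chatter around the threshold. A minimal sketch of that logic; the threshold voltages are example values, not from this installation, so set them for your own battery bank.

```python
# Open the solar-input relay when the battery reaches full voltage,
# close it again only after the voltage has sagged a bit (hysteresis).
FULL_VOLTS = 14.2       # disconnect above this -- example value
RECONNECT_VOLTS = 13.2  # reconnect below this -- example value

def relay_should_be_closed(battery_volts: float, currently_closed: bool) -> bool:
    if battery_volts >= FULL_VOLTS:
        return False              # fully charged: cut the solar input
    if battery_volts <= RECONNECT_VOLTS:
        return True               # discharged enough: resume charging
    return currently_closed      # in between: hold the current state
```

The gap between the two thresholds is what keeps the relay from cycling rapidly as the battery voltage hovers near full charge.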
While waiting for the parts from China an article surfaced that suggested trying 4 turns of both wires through one toroid of mix 31. That was tried and it did not reduce the noise noticeably.
In an act of desperation, bypass relays were inserted in the solar panel input leads so each of the panels can be cut off completely if it is causing interference. This is the method referred to above as “Rube Goldberg.” The difference is the relays are controlled remotely from home over the Internet instead of by an Arduino monitoring the Modbus or by a voltage detector. So far it works perfectly. Case closed. For now.
Update February 2023: Along came Node-RED and big improvements have been made.
When LiFePO4 batteries are located in an unheated outdoor equipment shed in a climate like Colorado’s, their winter temperature can fall below freezing quite often, and LiFePO4 batteries will be damaged if they are charged while colder than freezing. A couple of uninviting options exist. First, the shed can be insulated and heated, which could be a lot of work and expense. Second, the batteries could simply not be charged until the temperature warms up; even on a sunny day that typically means around noon, leaving time for only a partial charge on short winter days. A third option appears to be the least painful: provide an external source of heat directly to the batteries. There doesn’t seem to be any product marketed specifically as a LiFePO4 battery heater. Researching alternatives, one possibility is the silicone heaters used to warm the beds of 3D printers: flat, available in various shapes, voltages, and power ratings, and inexpensive. A sampling was ordered and tried out. Finally selected was the 20 watt, 12 volt heater shown below.
These pads fit nicely between alternating cells so each cell is adjacent to one heater. Leads are brought out and connected in parallel with wire nuts. Each heater draws 1.5 amps and in the lineup below 4 heaters draw 6 amps.
Getting this far was the easy part; figuring out how to power the heaters is the next challenge. It was quickly learned that using the batteries themselves was a net negative: the heaters use too much power and the batteries don’t get fully charged before the sun goes down. An external set of batteries was tried, but that just shifted the problem; after a few days the external batteries don’t have enough charge to run the heaters. Another failure was the use of timers to turn on the heaters only right before the sun came up. A new idea was needed. Time for …
While the batteries are too cold to charge and the heaters are running, the solar cells are sitting idle, wasting generated power. Why not use that solar power to run the heaters? Duh. This idea was tried and has been working successfully for several cold winter months. Power is tapped where the solar panels go into the solar controllers; the tap is the small red and black wires in the picture below.
Raw voltage from the panels is typically 20 volts, which might burn out the heaters. A 10 amp buck converter was inserted in the line to keep the voltage at 12 volts, one buck converter for each of the battery banks. A metal box limits the RFI emitted by the digital buck converters.
W1711 thermostats round out the installation. These little guys are set for 5 degrees Celsius, which allows some margin to make sure the batteries are kept above freezing when the sun is up. When the sun isn’t up there is no concern, because there is no solar power available to damage the batteries. What happens when there is solar power but the batteries haven’t warmed up above freezing? The Morningstar controllers were specifically chosen for their feature called “low temperature foldback”: even when there is solar generation, if the batteries are below freezing the Morningstar controller will refuse to charge the battery.
The vertical tested here is a DX Engineering DXE-MBVE-5A 43 foot vertical. Radials are four pieces of welded wire fencing each 25 feet long laid flat and terminated on a DX Engineering DXE-RADP-3 radial plate. The fencing is 48 inches wide. A RigExpert model AA-55 was used to make the measurements. Each band was tested, 160 meters through 10 meters, except 12 meters. Here are the results including the (poor) snapshots of the AA-55 screens.
160 meters:
|Z| = 506.9 ohms (notice the R component is only 11.8 ohms)
SWR = infinity
80 meters:
|Z| = 216.6 ohms
SWR = infinity
60 meters:
|Z| = 58.6 ohms
SWR = 3.5
The 60 meter frequency of 5357 kHz is very close to the resonance of a 43 foot vertical. A dip at 5957 confirms the expectation.
A quarter wave vertical with a perfect ground system should have an impedance of 36 ohms. For curiosity the AA-55 was adjusted to the antenna’s resonance at 5957. Here is what this antenna measures:
|Z| = 45.7 ohms
SWR = 1.10
This reading of 45.7 ohms indicates a ground loss of 9.7 ohms (45.7 – 36 = 9.7), or approximately 10 ohms. This value agrees with the amateur literature for a typical ground system. One example is Phil Salas AD5X’s presentation on the 43-foot vertical: “Assume 10 ohms of ground loss — Probably a much better ground than most hams have”. The efficiency calculation in the AD5X presentation should match the vertical in today’s test very closely. AD5X calculates 78%: for every 100 watts delivered to the antenna, 78 watts is radiated.
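The efficiency figure follows directly from the resistances; a sketch of the arithmetic:

```python
# Efficiency of a quarter-wave vertical = radiation resistance
# divided by total feedpoint resistance (radiation + ground loss).
R_RADIATION = 36.0     # ohms, ideal quarter-wave over perfect ground
r_measured = 45.7      # ohms, measured at resonance (5957 kHz)

r_ground_loss = r_measured - R_RADIATION           # about 9.7 ohms
efficiency = R_RADIATION / (R_RADIATION + 10.0)    # with AD5X's round 10 ohms
watts_radiated = 100 * efficiency                  # ~78 W of every 100 W delivered
```

The remaining ~22 watts are dissipated warming the ground under the antenna, which is why adding radials pays off directly in radiated power.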
An idea for improving this blog post would be to test a 43 Footer over a better radial system for comparison.
40 meters:
|Z| = 131.9 ohms
SWR = 4.8
30 meters:
|Z| = 636 ohms
SWR = 12.77
20 meters:
|Z| = 227.7 ohms
SWR = 17.03
17 meters:
|Z| = 102.7 ohms
SWR = 2.93
Notice another dip. This one at 17180 is the third harmonic of the fundamental frequency of 5957 kHz.
15 meters:
|Z| = 385.3 ohms
SWR = 7.8
10 meters:
|Z| = 61.2 ohms
SWR = 1.23
Matching 30 meters should be the most difficult at 636 ohms, but that’s well within the range of most automatic tuners. An additional challenge should be 160 and 80 meters with their infinite SWRs. One of many good tuners to use as an example is the MFJ-998RT. It is specified to handle impedances from 12 to 1600 ohms and SWRs up to 32:1. In practice, with this model of tuner installed on this 43 foot vertical, it matches beautifully on 80 through 10 but not on 160, maybe because the R component is only 11.8 ohms on 160. Optional coil and relay kits are available to add 160 meters. No matching problems have been noticed on 30 or 80.
A note of caution: just because an antenna matches does not guarantee it is getting out, possibly due to nearby objects or to the radiation pattern on each band. It may match perfectly on 10 meters while all of the energy goes straight up to the clouds, with only a little radiation at low angles.
On the other hand, antennas with a poor match can still make contacts, even with only a small amount of power being radiated, although inefficiently.
Insolation is a big word for “how much sunshine is there?” That’s an interesting bit of information when one is trying to keep batteries charged with solar panels. It serves as a cross-check that the charge amperage is consistent with the amount of sunshine each day.
The project consists of a photocell and an Arduino-like microcontroller board called an ESP32. The hardware, shown below, is very minimalist. The breadboard is just to hold the ESP32 in place, a USB cable brings in 5 volt power, and the round disc is the photocell.
The ESP32 connects to the Internet over WiFi and uploads data every 10 seconds using the MQTT protocol, “the standard for IoT messaging.” The data consist of the resistance of the photocell. A server processes the data and provides a web page GUI; the server is called a broker, and in this case the broker is provided free for personal use by Adafruit. The ESP32 board is also a product of Adafruit and cost $20 at Microcenter.
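On the ESP32 side the photocell is presumably read through the ADC. A common arrangement (an assumption here, not detailed in the original build) puts the cell in a voltage divider with a fixed resistor, from which its resistance can be recovered:

```python
# Assumed wiring: photocell from supply to the ADC pin, fixed
# resistor from the ADC pin to ground. Values are examples.
R_FIXED = 10_000   # ohms, fixed divider resistor
ADC_MAX = 4095     # the ESP32 ADCs are 12-bit

def photocell_ohms(adc_reading: int) -> float:
    """Recover the photocell resistance from one ADC sample."""
    if adc_reading <= 0:
        return float("inf")   # no light at all: cell resistance off-scale
    return R_FIXED * (ADC_MAX - adc_reading) / adc_reading

# Bright sun drives the cell resistance low, so the ADC reading climbs;
# the computed resistance is what gets published over MQTT.
```

Whatever the actual topology, some conversion like this runs on the microcontroller before each 10-second MQTT publish.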
Below is a screenshot of the GUI page, putting it all together.
Ideas for the next version: mount the ESP32 inside a solar-powered yard light and eliminate the USB cable. Disconnect the light and power the ESP32 instead.
This solar-powered LED yard light was chosen more or less at random for its reasonable price. When it arrived it looked like this:
Opening it up revealed a pleasant surprise which had not been mentioned in the sales description: it has an actual 18650 LiFePO4 battery. Perfect. This battery should power an ESP32 for many hours. The ESP32 draws 100 mA at 5 volts, which is one half watt. The 18650 is rated at 4.4 watt-hours (4.4 watts for one hour). That would be 4.4/0.5, or 8.8 hours. In reality that time would be extended by the ESP32 going into sleep mode when it’s not sending data; it would never need to send data constantly for 8.8 hours.
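That runtime estimate in code form:

```python
# Back-of-envelope runtime from the figures above.
battery_wh = 4.4            # 18650 rating, watt-hours
esp32_watts = 5 * 0.100     # 100 mA at 5 V = 0.5 W

runtime_hours = battery_wh / esp32_watts   # 8.8 hours at constant draw
```

With deep sleep between the 10-second reports, average draw falls well below half a watt, so the real runtime stretches far past this worst case.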
Unfortunately the controller board that comes with the unit will have to be discarded, because it doesn’t have the features needed for the ESP32.
Will the ESP32 fit inside the waterproof cabinet? Looks like it will.
In fact, a LoRa board will fit very nicely too, and that will come in handy for the next project, building a LoRa network.
Reading up on how to power an ESP32 from a solar yard light has revealed some challenges but also solutions. First, the cell voltage is 3.7, as can be seen in one of the pictures above. The ESP32 needs either 5 volts or 3.3 volts, neither of which is close to 3.7 volts. What is needed is either a boost converter to get up to 5 volts or a buck converter to get down to 3.3 volts. The battery voltage of 3.7 is nominal; it can vary from 4.7 down to 3.2. When it’s 3.7 or above the buck converter works fine, but when the voltage drops below 3.7 the buck converter shuts down. That rules out the 3.3 volt option. Looking at the 5 volt option, there is a possible solution: connect a standard charge controller between the solar panel and the battery, such as the TP4056 Charging Module 5V Micro USB 1A 18650 Lithium Battery Charging Board with Protection (5 pieces for $5.95 on Amazon), which looks like this. Its output will vary with the voltage of the battery.
Boost converters exist ($7.29 for 5 pieces on Amazon) that will provide a constant output of 5 volts with an input as low as 1 volt or as high as 5 volts and look like this.
The concept is that the charging module regulates the solar input to keep the battery properly charged. As the battery charges and discharges, its output voltage will vary; the boost converter takes that varying voltage as input and outputs a constant 5 volts.
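The buck-versus-boost reasoning can be sketched as a simple comparison. The dropout figure is an assumed example; real converters vary, so check the datasheet of the part you buy.

```python
CELL_MIN, CELL_MAX = 3.2, 4.7   # battery voltage range from the text
DROPOUT = 0.4                   # assumed converter overhead -- example value

def buck_ok(v_in: float, v_out: float) -> bool:
    """A buck converter needs its input above the output plus dropout."""
    return v_in >= v_out + DROPOUT

# At the low end of the cell's range, the 3.3 V buck option fails
# (3.2 V input cannot sustain 3.3 V output), which is why the 5 V
# boost path was chosen instead.
```

A boost converter rated down to 1 volt of input, like the one described above, stays in regulation across the cell’s entire 3.2–4.7 volt swing.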
Moving on to the next step, those parts will be ordered today. Total additional cost $2.86 per unit.