The performance of my satellite downlink station is nowhere near as good as I would like. I’ve been trying various troubleshooting measures to figure out what is going on. In my previous post, you can see some of my explorations of antenna tuning. In this post I took my first pass at examining the feedline. It’s been up for at least five years, and I wanted to make sure nothing untoward had happened to it.
To help measure it, I ran a TDR test on the feedline. This is not a perfect measurement, as I left the antenna attached, which skews the results. But it gives you the opportunity to see where impedance changes occur along the feedline. Here’s what the resulting plot looks like:
At first I was confused, thinking my feedline was badly broken. It’s important to note that the current NanoVNA-Saver software doesn’t present TDR the way most people expect. It stacks the impedances of the various segments, so you shouldn’t read that big flat segment in the middle as being at 90 ohms.
Breaking it down by segment I get the following: short stub that adapts SMA to UHF connector; 10 feet of LMR 400 equivalent; a big impedance jump as the signal travels through a 3 inch barrel connector through a door; 30 feet of LMR 400 outside; another barrel connector; 10 more feet of LMR 400; then the antenna.
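For reference, the distance that a TDR plot assigns to each feature comes from the cable’s velocity factor. A minimal sketch of that conversion, assuming LMR-400’s commonly published velocity factor of roughly 0.85 (check your cable’s datasheet):

```python
# Sketch: convert a TDR round-trip reflection time into distance along the cable.
# Assumes a velocity factor of ~0.85, a typical published value for LMR-400.

C = 299_792_458           # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.85    # fraction of c at which the signal travels in this cable

def reflection_distance_m(round_trip_ns: float, vf: float = VELOCITY_FACTOR) -> float:
    """Distance to an impedance discontinuity, given the round-trip time in ns."""
    seconds = round_trip_ns * 1e-9
    # Divide by 2 because the pulse travels out to the discontinuity and back.
    return C * vf * seconds / 2

def meters_to_feet(m: float) -> float:
    return m * 3.28084
```

With these numbers, a reflection at roughly 24 ns round trip sits about 10 feet down the cable, which is how I matched plot features to my physical segments.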
What did I learn in the end? It looks like my various lengths of cable are fine. Most importantly, I learned that UHF connectors have a surge impedance of around 35 ohms instead of the 50 ohms we are looking for. That causes reflections and degrades the return loss of your feedline. Wikipedia article on the impedance of UHF connectors: https://en.wikipedia.org/wiki/UHF_connector
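The size of that mismatch is easy to put a number on using the standard reflection-coefficient formula. A sketch (this treats the connector as a fixed 35 ohm load; a real connector is electrically short, so the effect at these frequencies is smaller, but it shows the scale of the problem):

```python
import math

def reflection_coefficient(z_load: float, z0: float = 50.0) -> float:
    """Voltage reflection coefficient at a single impedance step."""
    return (z_load - z0) / (z_load + z0)

def return_loss_db(gamma: float) -> float:
    """Return loss in dB for a given reflection coefficient (bigger is better)."""
    return -20 * math.log10(abs(gamma))

gamma = reflection_coefficient(35.0)  # ~35 ohm surge impedance of a UHF connector
# gamma comes out around -0.18, i.e. roughly a 15 dB return loss from that
# single step alone, before any cable loss is considered.
```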
So where does that leave me?
This showed I didn’t have any problems in my feedline other than the ones I caused myself by using UHF connectors and patching together shorter lengths of cable. My next steps in evaluating the system performance will be to measure the SWR and return loss of just the feedline without the antenna connected. This means I’ll replace the antenna with a 50 ohm calibration standard to measure return loss, then replace it with a dead short to measure insertion loss. More detail here: https://www.tek.com/blog/improving-vna-measurement-accuracy-quality-cables-and-adapters
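For anyone following along, the quantities in that plan relate through a couple of textbook identities (nothing specific to my station). With the dead short attached, essentially all the power reflects, so the measured loss is the cable loss taken twice:

```python
import math

def swr_from_return_loss(rl_db: float) -> float:
    """VSWR corresponding to a return loss in dB."""
    gamma = 10 ** (-rl_db / 20)
    return (1 + gamma) / (1 - gamma)

def return_loss_from_swr(swr: float) -> float:
    """Return loss in dB for a given VSWR."""
    gamma = (swr - 1) / (swr + 1)
    return -20 * math.log10(gamma)

def one_way_loss_db(short_test_rl_db: float) -> float:
    """Cable insertion loss estimated from the dead-short test.
    The short reflects everything, so the signal crosses the cable twice."""
    return short_test_rl_db / 2
```

For example, a 20 dB return loss corresponds to an SWR of about 1.22, and a 3 dB reading on the short test implies about 1.5 dB of one-way cable loss.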
I’ve been having a strong desire to replace every UHF connector with a Type N connector. These measurements are an attempt to quantify what, if any, practical improvement I would see from that change. I fear that my real problem is that I live in a city and there is just far too much noise around me.
For my satellite ground station I built an antenna from the plans by WA5VJB: http://www.wa5vjb.com/references/Cheap%20Antennas-LEOs.pdf
I originally built a 7-element 70 cm antenna, then added a 2-element 2 m one. I had tuned the 70 cm antenna, though after the second cut to shorten the driven element it ended up tuned a little too low. But it passed, just barely.
While attaching the new antenna to the old, I got frustrated smacking the darn thing into everything and hacked off the end, converting it to a 5-element in short order. With years of experience under my belt, I have learned that anger is not a great engineering design philosophy.
I didn’t check the tuning before mounting it up here.
Let’s take a look at what the NanoVNA says. This is the return loss chart for the 2 m band.
While the tuning is a little low for what I want, it’s close enough to work. For example, most satellites are around 145 MHz.
Unfortunately, the 70 cm side is awful. I need tuning closer to 435 MHz, but this bad boy is coming in at 415 MHz. Looks like it’s time to take out the plumber’s torch and solder some more wire back onto the antenna :).
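As a rough sanity check on how much metal a retune involves, a resonant element’s length scales inversely with frequency, to first order. A sketch (this ignores end effects, element diameter, and coupling to the other elements, so treat it only as a starting point before re-measuring):

```python
def scaled_element_length(current_length: float, f_current_mhz: float,
                          f_target_mhz: float) -> float:
    """First-order rescale of a resonant element: length varies as 1/frequency.
    Ignores end effects, element diameter, and coupling to nearby elements."""
    return current_length * f_current_mhz / f_target_mhz

# Moving resonance from 415 MHz to 435 MHz is a length change of about 4.6%:
# scaled_element_length(1.0, 415, 435) -> ~0.954
```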
I’ve been using Pi-hole for a while, and it just caused too many problems. Many shopping carts across the web would simply fail. I need to run Google ad campaigns, and you can’t get to the admin UI. Therefore I decided to move DNS resolution back to my ER-X and use cloudflared to resolve DNS queries, so CenturyLink has a harder time selling my browsing history.
I found a great post on how to do this at https://reichley.tech/dns-over-https-edgerouter-x/, but it doesn’t cover the v2 series of EdgeOS, which is based on Debian 9. These are my quick notes on changes to those directions.
When you install a new update of EdgeOS, it overwrites all the default partitions such as /usr. Therefore I decided to store my files in /config/user-data, an area that persists between system updates.
On the machine used to upload:
scp cloudflared user@erx:/config/user-data/cloudflared
I also decided to store a copy of config.yml in this directory before copying it over to /etc/cloudflared/config.yml. That way, after an upgrade I have less work to do.
EdgeOS v2 uses systemd instead of init.d for startup.
sudo cp /config/user-data/cloudflared/config.yml /etc/cloudflared/config.yml
sudo /config/user-data/cloudflared/cloudflared service install
sudo vi /etc/systemd/system/multi-user.target.wants/cloudflared.service
Modify the ExecStart line to include the pid info (not sure if we need this with systemd):
ExecStart=/config/user-data/cloudflared/cloudflared --config /etc/cloudflared/config.yml --origincert /etc/cloudflared/cert.pem --pidfile /var/run/$name.pid --no-autoupdate
sudo systemctl enable cloudflared.service
sudo systemctl start cloudflared.service
systemctl status cloudflared.service
Also, /usr/sbin wasn’t in my PATH, so I had to call tcpdump directly:
/usr/sbin/tcpdump -nXi eth0 port 443 and dst host 188.8.131.52
After an upgrade I should only need to re-run a few of the steps above.
I’ve been slowly working towards building a satellite uplink and downlink station for years. I have finally reached the point where the station is built and I have received some signals.
The performance is rather abysmal, so I still have a fair amount of work to do.
Over the years I’ve generally avoided Excel. Being a programmer, I could just pick up Python and write code to do what I needed; I didn’t need to hack something together in Excel. But I always ended up back there for the charting.
Then I learned R and had even more reason to avoid Excel.
Recently I needed to implement date-based cohorts in SiteCatalyst. While there are a few blog posts on how to do this in Excel using Report Builder (http://adam.webanalyticsdemystified.com/2013/03/07/conducting-cohort-analysis-with-adobe-sitecatalyst/, http://blogs.adobe.com/digitalmarketing/mobile/what-is-mobile-cohort-analysis-and-why-should-you-use-it/), they didn’t work for me. My team is all on macOS, and Report Builder isn’t.
In this example I’m going to use events tracked by the Mobile Library lifecycle stats. One plus of this solution is that it doesn’t require any SAINT classifiers to convert mobileinstalldate to a month/year.
The idea here is you use QueueTrended to chunk together unique users by month, with mobileinstalldate as the counted event. If you look at the data output from QueueTrended, it makes more sense. The rest is then using plyr and reshape2 to beat the data into the form we want. It works just fine with segments.
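To illustrate the reshape step (in Python/pandas rather than the R/plyr pipeline this post actually uses, and with made-up column names and numbers), here’s a sketch of beating trended rows into a cohort matrix:

```python
import pandas as pd

# Hypothetical output shape from a trended query: one row per
# (install cohort, activity month) pair with a unique-user count.
rows = [
    ("2014-01", "2014-01", 100),
    ("2014-01", "2014-02", 60),
    ("2014-01", "2014-03", 45),
    ("2014-02", "2014-02", 120),
    ("2014-02", "2014-03", 80),
]
df = pd.DataFrame(rows, columns=["install_month", "activity_month", "unique_users"])

# Pivot into the cohort matrix: one row per install cohort,
# one column per activity month (missing cells become NaN).
cohorts = df.pivot(index="install_month", columns="activity_month",
                   values="unique_users")
```

In the R version, reshape2’s dcast plays the role of the pivot here.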
I’m not sharing my code that generates the percentages because I’m not particularly happy with it yet. Drop me a line if you are interested.
And yes, the data is small; this is from a private, unreleased product I am working on.
First off, my apologies to actual Decision Scientists. I have no formal training and just recently learned that the area I’m fascinated by actually has a name.
There are a lot of anecdotes out there about how wonderful all the different Lean Startup methodologies are. If you go and read the Amazon book reviews, you’ll see lots of comments about how it changed someone’s life.
What you won’t see is any data showing they actually help.
I’m a skeptic at heart. Studies on businesses are notoriously difficult to do, and while I could see value in the tools and techniques, I really wanted a deeper scientific basis for all the hype.
I finally managed to find it within the Decision Science literature. Daniel Kahneman has done a great survey of key ideas and concepts in “Thinking, Fast and Slow”. It’s a long book, but for the purposes of this discussion we really care about part 3: Overconfidence. “The Signal and the Noise” by Nate Silver also talks a fair amount about our inability to do forecasts well. Lastly, “Naked Statistics” provides another view.
The Lean Startup Methodologies (LSM) are designed to help you answer two questions:
1) Should I even bother building a full product?
2) If I do build the product, should I continue with incremental changes, or pitch the whole thing and start over?
As an entrepreneur with a product or business idea, you are going to be subject to three different cognitive illusions. These are things that are common to all of humanity, and there is little you can do to train them away. The three illusions I see as the most problematic are Optimism Bias, Domain Expert Overconfidence, and Confidence of Prediction in an unstable environment.
Optimism Bias
Humans in general tend to think they perform better than average. In particular, entrepreneurs tend to be even more optimistic than the general population about their ability to beat the odds. For instance, the base rate of failure for a new business is 65%, but the vast bulk of entrepreneurs put their chance of failure at 40%. This delusion can help us make it through the day, but it can also cause you to stick with something for far too long. (Kahneman, Daniel (2011-10-25). Thinking, Fast and Slow (p. 256). Farrar, Straus and Giroux. Kindle Edition.)
Domain Expert Overconfidence
Everyone is over-confident about their ability to predict the future. It doesn’t matter how much of an expert you are in your field; you are overconfident. Time and time again, research shows that a crappy mathematical model informed by expert opinion is almost always better than either the model or the expert alone. (http://www.rff.org/Events/Documents/Lin_Bier.pdf, all of “The Signal and the Noise”).
The second part here is apropos to a problem I’m wrangling with at work. I’m working on a product that is all about market creation. By definition, my ability to research and learn about my market is terribly limited since the market doesn’t exist. Learning the Lean Startup tools improves your metacognitive ability to see your own weakness in expertise and therefore adapt to it. (http://gagne.homedns.org/~tgagne/contrib/unskilled.html)
Confidence of Prediction
When you ask yourself “Do I need this feature in my product?”, the question you are really asking is “Will adding this feature to my product add enough value to my business in the future to justify the (opportunity) cost now?”. That is attempting to forecast the future, something humans can be good or bad at depending on how quickly they get feedback about their decisions. Firefighters and nurses, who get almost instantaneous feedback on the quality of their forecasting, are able to build a great amount of skill in this area. Those of us operating on longer time scales fall prey to building confidence in our predictions without actually improving our accuracy. Think of how much that sucks for a moment. You can be in a field for years, making forecasts, assuming you are getting better, but in reality, you aren’t. (Kahneman, Daniel (2011-10-25). Thinking, Fast and Slow (p. 240). Farrar, Straus and Giroux. Kindle Edition.)
Essentially it all boils down to the fact that you are going to feel really confident about your idea and plans, but that confidence is not based on actual hard data. It is instead an illusion fed by how we perceive and think about the world. You can’t use your gut ‘Confidence’ check to know if you really have a product at the get-go; we’re just not wired that way.
But by acknowledging our limitations, we can figure out ways to work around them.
So how do you deal with this trifecta of Overconfidence in your idea?
To quote Steve Blank: “GET OUT OF THE BUILDING” and go talk to customers.
All the different LSMs out there have different suggestions on how to go about defining a market segment, finding customers, and talking to them. But it boils down to talking directly to customers to counteract your overconfidence. They all recommend a progressive approach to your research. You start with wide-open investigational interviews, and as you learn more (and validate or invalidate your overconfident beliefs) you start using more structured techniques to gather more accurate (but less wide-ranging) data, including things like surveys and prototypes. You can even run A/B value-prop testing with a mock web site and Google AdWords.
That said, you need to be cautious about over-generalizing your results. We humans love to see patterns and overvalue the things we see and measure, even when they carry low confidence.
(Patrick Leach (2006-09-15). Why Can’t You Just Give Me The Number? (Kindle Locations 1911). Probabilistic. Kindle Edition. )
Launching – Answering the “Persevere or Pivot” question
These pre-launch experiments span a whole range of cost, accuracy, and specificity. On the low end you have informal unstructured interviews. These are great for proving things out early on, and also for finding a better business idea than the one you thought of originally. On the high end of cost and complexity, you have large-scale polling done right (i.e., random sampling of customers, proper question wording to avoid bias, etc.). These aren’t going to find you that better business idea, but they can provide a very accurate and specific answer to the question you pose.
At some point though (and this will be very specific to you and your circumstances) you’ll need to sit back and decide to stop the pre-launch experiments. When do you stop? When failing in market will be better than continuing to run experiments. For example, if you have a large existing business (let’s say $100M in annual revenue) that you are thinking of disrupting with a new revenue model, you probably want to use the more formal methods, since spending $50,000 on a polling firm and taking the time is small relative to the risk. But a 6-person startup with no actual revenue yet? It depends on the size of your market. If you only have 100 potential customers, you may want to do more upfront work, because running experiments on customers can cause them to get grumpy and leave. But if you have a larger target market, it’s fine to lose some customers while you work things out. If you are looking for a more decision-science-based approach, see Chapters 11 and 12 of Patrick Leach (2006-09-15), Why Can’t You Just Give Me The Number?. The different market sizes account for the different approaches in the LSM books.
Once you get a product in market, you are still subject to the same overconfidence illusions around forecasting. This is where the second part of the LSM stuff kicks in: Analytics and Rapid iteration.
I’m totally thrilled to watch market after market get disrupted by rapid prototyping. On the hardware side we had FPGAs come along in the ’90s that allowed really interesting products to be built without the capital outlay needed for an ASIC. On the SaaS side, the AWS/DevOps/hardware-as-software movement has added nimbleness to that field. Outside of computing, the revolution around rapid prototyping, 3D printing, and cheap CNC tools (like CNC plywood routers) has drastically changed things. Even the repatriation of hard-goods manufacturing is occurring because it allows businesses to iterate faster (http://www.nytimes.com/2011/10/13/business/smallbusiness/bringing-manufacturing-back-to-the-united-states.html?pagewanted=all).
How can overconfidence get you after launch? Go read the opening chapters of “The Startup Owner’s Manual” to learn how Webvan’s overconfidence caused them to ignore the metrics they were getting and fail big.
The steps at this point are:
1) Ship an iteration of the business (this includes ad copy, market segment, marketing website and materials, and the actual product)
2) Observe behavior using quantitative metrics
3) Use that to drive qualitative discussions with customers
4) Make a hypothesis and modify product/web site/ad copy
It’s easy to get analytics wrong. Eric Ries labeled these ‘Vanity Metrics’. These are metrics that are pretty much guaranteed to give you the answer you want (generally up and to the right). But much like qualitative interviews, there is a broad spectrum of accuracy and complexity around implementation. For that first launch you don’t need much. Just a retention chart that is keyed off the activity that drives your engine of growth is enough to shake your confidence. You are looking for analytics that help you detect the huge problems in your overconfident assumptions. You aren’t at the point where you care about 3% improvement in a number or running a linear regression on your data.
Don’t know what metrics to track? Grab a copy of Lean Analytics (http://leananalyticsbook.com/). They break down a large number of different business models and what you should be looking at to decide whether to throw in the towel.
How quick should your iterations be? As quick as possible without pissing off your customers or partners. For instance, if you are growing rapidly you should iterate quickly (daily, even): the people irritated by all the change will always be a shrinking proportion of your total base, since you are adding new customers at a very fast rate. I personally (overconfidently and untested, of course) think you need to be willing to lose your early customers and therefore shouldn’t worry about them.
One last moment of reflection. These are all really cool tools. But if you’ve already decided on a course of action, the value of any new information may be zero since it will not change your mind. In that case, just make your decision and go on. I like these tools (in particular stochastic modeling), but in all honesty, if you crack open Bayesian theory and run the numbers, they only help increase your odds of a good outcome by a small amount. This is due to the huge amount of raw luck and chance that exists in the world. A lot of this is outside of our scope of control (I feel for everyone who launched a new business right before the great recession).
So have fun and enjoy yourself!
“The Lean Startup”
“Thinking, Fast and Slow”
“The Signal and the Noise”
“Why Can’t You Just Give Me The Number? …Guide to using Probabilistic Thinking to Manage Risk and to Make Better Decisions”
“The Startup Owner’s Manual”
Back in my original post about my electric bike conversion I mentioned that I had CFS, aka Chronic Fatigue Syndrome. CFS is a poorly defined health condition that in my opinion actually covers a number of very different health conditions. I’m glad I never really accepted it as a diagnosis from the rheumatologist who mentioned it to me, and instead kept scouring my life and health for anything that could impact my energy level.
I’m happy to say I no longer suffer from it.
While it was a slow slide, it hit me hard in the fall of 2008. I had just shipped my first big release as a new engineering manager; planned a wedding and got married; and started flight training that summer. By the time the wedding rolled around I was tired in a way I had never been before. No amount of coffee or sleep would help.
It took me 4 years to figure out all the contributing factors and rehabilitate myself. During that time I met with 7 different doctors trying to get a handle on all the different causes that would contribute to being run down.
I can be a tenacious bastard when I have a goal in sight.
At one point I was working with an endocrinologist and still couldn’t handle aerobic exercise. Ten minutes with my heart rate above 105 bpm and I would need to sleep a couple of hours later. I had to ask him, “Do you have any more ideas? Is there anything else _you_ can do for me now?” After a long pause, he admitted no.
Which was a good thing because then I went searching for more answers. In the end I had to address the following things to get my energy back:
- Sleep Apnea
- Stress Management
- Sleep Quality
- Vitamin D levels (I was near the level that causes rickets)
- Stress response to exercise