Armour39 Heart Rate Monitor Review

UPDATE: April 2015. DO NOT BUY. Device still does not work, even with a new iPhone. App has never been updated in the App Store.

UPDATE: Nov 24, 2014. DO NOT BUY. Have had continuous problems with this device not pairing and it has been less than 3 months.
Customer service was helpful for the first three calls, but on the fourth call they basically told me I was out of luck and should wait for a future (unscheduled) app update that, if I’m lucky, will fix the issues. I have never had so many problems with a device, and cannot believe they did not offer to replace it; when I asked directly, they said they did not think a replacement would fix the issues. DO NOT BUY.

The Under Armour Armour39 Heart Rate Monitor is an excellent new addition to the heart rate monitor market. It consists of a chest strap and transmitter that pairs with an app on your iPhone or iPod touch (or with a watch, purchased separately).

Debugging Access-Control-Allow-Origin issues in Apache

Here’s how I debugged and solved a cross-origin security issue that prevented iPad and Android devices from loading some of the files on my site, because those files were served from a different port. Client-side (browser) tools did not provide the level of insight I needed to debug the server issue.

The problem:

I was running a Django server on one port (8000) with Apache serving static files from port 80. On iPad and Android devices, my JWPlayer 6 video player was giving users an error that my custom skin file could not be found. The skin file was simply an XML file served by Apache on the same machine, and it worked just fine on laptops and desktops, Mac and PC, but not on phones or tablets.

I found the underlying client-side problem first with the Mac iOS Simulator and Safari Developer Tools (and later by simply spoofing the Android user agent from my Chrome browser and checking console errors). The error was:

XMLHttpRequest cannot load http://localhost.myhost.com/static/inc/jwsk/glow_gb.xml. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost.myhost.com:8000' is therefore not allowed access.

I found a number of reports on the issue, including this Stack Overflow post. To correct my problem, I needed to set up CORS (Cross-Origin Resource Sharing) via some Apache directives.

I verified that the loosest setting in Apache did indeed fix the problem, so I was on the right track:


<FilesMatch "\.xml$">
    Header set Access-Control-Allow-Origin *
</FilesMatch>


However, I wanted stricter security in place, so that not just any old hostname would match. Based on some misleading statements in the Stack Overflow thread, I believed I could use multiple lines for this response header, using environment variables to generate the different combinations of host and port I wanted to allow. I tried dozens of combinations unsuccessfully and struggled to figure out how to debug Apache’s behavior. Finally I found this post (http://archive.ianwinter.co.uk/2010/11/18/log-response-headers-in-apache/), which got me headed in the right direction.

Debugging the issue

I added the following lines to my Apache conf file, to emit to my logs both the environment variables I was setting and the request/response headers relevant to the issue:

# capture server & host from the request Origin
SetEnvIf Origin "^(.*\.?myhost\.com)(:[0-9]+)?$" ORIGIN_0=$0 ORIGIN_1=$1 ORIGIN_2=$2
# add variations of incoming host/port to the response headers
<FilesMatch "\.xml$">
    Header set Access-Control-Allow-Origin "%{ORIGIN_0}e" env=ORIGIN_0
    Header set Access-Control-Allow-Origin "%{ORIGIN_1}e" env=ORIGIN_1
</FilesMatch>

# emit the request "Origin" header, the response "Access-Control-Allow-Origin" header, and my 3 environment variables on each log line
LogFormat "%h %l %u %t \"%r\" %>s %b ORIG=\"%{Origin}i\" ALLOW=\"%{Access-Control-Allow-Origin}o\" O0=\"%{ORIGIN_0}e\" O1=\"%{ORIGIN_1}e\" O2=\"%{ORIGIN_2}e\"" common2

#use my custom log format 
CustomLog /var/log/django/prototype_access.log common2

After restarting Apache, my logs showed output like this when I reloaded an affected page:

127.0.0.1 - - [15/Mar/2014:13:41:26 -0700] "GET /static/inc/i/logo2.png HTTP/1.1" 304 - ORIG="-" ALLOW="-" O0="-" O1="-" O2="-"
127.0.0.1 - - [15/Mar/2014:13:41:26 -0700] "GET /static/inc/i/unlimited.png HTTP/1.1" 304 - ORIG="-" ALLOW="-" O0="-" O1="-" O2="-"
127.0.0.1 - - [15/Mar/2014:13:41:26 -0700] "GET /static/inc/jwsk/glow_gb.xml HTTP/1.1" 304 - ORIG="http://localhost.myhost.com:8000" ALLOW="http://localhost.myhost.com:8000"  O0="http://localhost.myhost.com:8000" O1="http://localhost.myhost.com" O2=":8000"

Eureka.

The Solution

I could verify via the Apache logs that my environment variables were working as expected. My problem was that only one of my Access-Control-Allow-Origin headers was taking effect, and not the right one: I needed to allow origins without the Django port.
Once I removed the extra lines, I was left with this configuration, which solved my problem by allowing the Apache host (without the Django port) in a single response header directive.


SetEnvIf Origin "^(.*\.?myhost\.com)(:[0-9]+)?$" ORIGIN_1=$1
<FilesMatch "\.xml$">
    Header set Access-Control-Allow-Origin "%{ORIGIN_1}e" env=ORIGIN_1
</FilesMatch>

My final Apache logs looked like this:

127.0.0.1 - - [15/Mar/2014:13:41:26 -0700] "GET /static/inc/jwsk/glow_gb.xml HTTP/1.1" 304 - ORIG="http://localhost.myhost.com:8000" ALLOW="http://localhost.myhost.com" O0="http://localhost.myhost.com:8000" O1="http://localhost.myhost.com" O2=":8000"
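To sanity-check the capture groups offline, here’s a rough simulation of what SetEnvIf extracts from an Origin header, using sed in place of Apache (the hostname is illustrative, and sed’s regex engine differs slightly from Apache’s, so treat this as a sketch):

```shell
# Simulate the SetEnvIf capture groups against a sample Origin header.
# Group 1 (ORIGIN_1) = scheme + host without the port; group 2 (ORIGIN_2) = optional :port.
origin="http://localhost.myhost.com:8000"
echo "$origin" | sed -E 's#^(.*\.?myhost\.com)(:[0-9]+)?$#host=\1 port=\2#'
# → host=http://localhost.myhost.com port=:8000
```

This mirrors the log output above: ORIGIN_1 is the value that ends up in the Access-Control-Allow-Origin header.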

Robot Vacuums (Neato)

June 2014 Update:

I have grown to adore my Neato.
Visitors comment on how clean my floor is. I simply hand-vacuum the edges of the room once a month, to get the areas the Neato can’t. I walk barefoot around the house all day, and don’t have to wipe my feet to get in bed. It runs 3 times a week to keep things spic and span, and only gets stuck once every few weeks. I’ve gotten into the habit of keeping my phone charging cords off the floor, and for a few bucks, I purchased a 10-ft cord organizer for all the cables running behind the couch. The Neato avoids it. I even carried my Neato up to the attic and let it suck up the dust and guck that only an attic can collect. That was a perfect job for a robot!

Feeling so far: It could be quieter, but it does a heck of a job. If you don’t have the luxury of a house cleaner and can clear your clutter enough for a robot to navigate the floors, the Neato is a winner.

Dec 2013:

This Christmas, I finally got something I’ve wanted for years: a robotic vacuum. My dog leaves a literal trail of hair in her wake, akin to Pig-Pen’s dust cloud, so this bot has its work cut out for it. On a friend’s recommendation (about the quality of the cleaning algorithm), I got the Neato (XV-14) instead of the better-known Roomba.

Day 1. Gratification and disappointment

Wow, it’s a lot louder than I was expecting. The dog doesn’t love it, but she tolerates it. It moves SO slowly (~15 minutes for a 10×10 room with almost no obstacles, about an hour for 4 small rooms).

It doesn’t clean the edges of the room, nor behind/around things, which is where a lot of the dog hair tends to collect. Even so, it filled its dirtbin twice on day 1, so that was very gratifying. Quite amusing to see it climb my digital scale, push the dog bowl and even a chair. Ran out of juice once requiring a recharge, and made a bit of a scrunched up mess of my lightweight kitchen rugs, though it managed to not get stuck on them.

Feeling so far: Mostly disappointment that robot vacuums aren’t further evolved. I wish Apple made one.

Day 2. Learning boundaries and eating cords

Today I set up the auto-schedule feature so it would run mid-day on a different floor of the house (I work from home), to be away from the noise.

  1. Within a few minutes of auto-starting, I heard the Neato’s oddly pleasant bleat for help. It had sucked up the end of my new iPhone charger and stopped. It stripped a bit of plastic from the wire but thankfully didn’t kill the cord. I cleared the obstacle and vacuuming resumed.
  2. A few minutes later, another bleat as the vacuum had managed to get stuck on a different cord. I never realized how many cords I had lying around. Cleared that too.
  3. A few minutes later, it got stuck under the couch next to a bunch of cables (but not having eaten them), complaining its vision was blocked. Though I moved it to an open space, the error code would not reset on its own until I put it back on its dock. That cleared the error state right away.
  4. At some point it ran out of power 6 inches away from its dock and stopped there. I manually redocked it.

Feeling so far: Robot vacuums are fun to watch. But you really have to clean up in preparation for them (moving cords, obstacles, etc.), and that’s extra work.

Day 3. Skipped

Didn’t schedule it to run. Kinda missed it.

Day 4. Stops and starts

Today I was smart and moved everything out of the way ahead of time: cables, obstacles, etc. The vacuum started on its own schedule, which was great.

  1. I heard it stop after 10 minutes or so and found that it had redocked and claimed it was finished. My still visibly dirty floor and the short run period were evidence it had skipped most of the floor, so I manually restarted it.
  2. A few minutes later, I heard it shut off again. This time it ran out of juice but made it back to its dock for recharging. They say it takes a few charges to hold its full charge. Later it restarted on its own.
  3. Stopped shortly thereafter again. The display reported that the brush was stuck. Apparently my long hair had wrapped around it and knotted. This required a quick (and easy to find) Youtube lesson on how to remove and take apart the brush, which only took a few minutes and wasn’t hard. Given my hair, I’m sure I’ll need to do this on a regular basis. The error code wouldn’t clear on its own, so I redocked it and that cleared it.

Did a decent job cleaning today and it was satisfying to empty the half-full waste collector.

Feeling so far: There is not much that’s truly automatic about this vacuum yet. It requires constant babysitting for stops and starts. My floor looks cleaner than usual though.

The deceased live on LinkedIn

In the relatively young landscape of social media, one awkward area has to do with the accounts of the deceased on LinkedIn.

I now have 2 friends on LinkedIn who have died unexpectedly young. Both are still active on LinkedIn, and show up as still working at their last job. I dread the day that I am prompted to Congratulate one of them on their n-year anniversary at their ‘current’ employer. As time goes on, the number of dead on social networks will inevitably grow. And grow.

There’s a part of me that rejoices in seeing my friends’ names again, having a way to visit their pages and feel the idle connection come alive. But on the heels of that feeling comes the inevitable knife-in-the-gut, the remembrance of the loss: a cruel, insensitive reminder.

LinkedIn provides a way for members to report accounts of the deceased to initiate account shutdown. Generally, I would hope that a family member would get to make this decision, and not a colleague.

These accounts are living epitaphs. Miss you guys.

AWS Command Line Interface (aws cli) Tips

Today I started playing with the new Amazon Web Services command line interface tools to issue aws commands from my console and scripts.

Installation was straightforward, but I realized right away I needed to set the target region for my commands, so I decided to use the custom config file approach, setting an env variable AWS_CONFIG_FILE to point to my config file path.
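For reference, that environment variable is just an export in your shell profile (the file path here is illustrative):

```shell
# e.g. in ~/.bash_profile: point the aws cli at a custom config file
export AWS_CONFIG_FILE=$HOME/.aws_config
```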

The aws cli tools are not very well documented yet, and there are multiple obsolete versions of the docs floating around as well, so here are a few quick corrections.

Regarding your AWS config file:

1. You must explicitly prefix named config sections with “profile”, e.g. [profile oregon], not [oregon].

If you do not, an otherwise valid config file does not work, yielding this error:

A client error (InvalidLocationConstraint) occurred: The specified location-constraint is not valid

Here’s a valid config file:

[profile oregon]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
#oregon
region = us-west-2

[profile norcal]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
#n. california
region = us-west-1

2. Inline comments are not supported in the config file, only full-line comments.

If you do use an inline comment in your config (as one of their examples does), you may see the error I saw:

A client error (InvalidLocationConstraint) occurred: The specified location-constraint is not valid

So this inline region comment is invalid:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2 #oregon
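A quick way to catch this mistake in your own config is to grep for lines that mix a value with a trailing comment (a rough heuristic, not a full parser; the temp file just stands in for your real config path):

```shell
# Write the bad example to a temp file, then flag any line with a '#' after a value
cat > /tmp/aws_config_demo <<'EOF'
[default]
region = us-west-2 #oregon
EOF
grep -nE '=[^#]*#' /tmp/aws_config_demo
# → 2:region = us-west-2 #oregon
```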
 

3. There is no fallback to env variables if you skip variables in the config.

Even though I set AWS_ACCESS_KEY and AWS_SECRET_KEY in my environment, I get the error:

Unable to locate credentials

So here’s another bad file.

[default]
region = us-west-2 

#missing aws_access_key_id 
#missing aws_secret_access_key

Setting up SSL on Mountain Lion (Mac OSX 10.8)

Note: For my mac dev box, I load my custom Apache configurations in a standalone file, e.g. /etc/apache2/other/mysites.conf. I tend to leave the default apache files in place for reference. You may prefer to edit or replace the default files instead.

I started by reading this set of instructions, but had to work around a few other issues to get https working. Here are the full steps I used.

Steps

1. Per my last post about setting up https in production, generate a private key this way (if you don’t have the one for production):

$ openssl req -nodes -newkey rsa:2048 -keyout mysslprivatekey.key -out localhost_mycompany.csr
Generating a 2048 bit RSA private key
.............................................+++
.......................+++
writing new private key to 'mysslprivatekey.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:MyTown
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:www.mycompany.com
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
    

Note I didn’t set a password, email, or anything optional.

2. With that private key, create a local certificate. (Note that I use the hostname localhost.mycompany.com for my browser testing in development.  I have an entry in my /etc/hosts file pointing this name to 127.0.0.1. Using the FQDN for localhost allows access to full cookie functionality.)
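The /etc/hosts entry mentioned above looks like this:

```
127.0.0.1    localhost.mycompany.com
```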

$ openssl req -new -x509 -key mysslprivatekey.key -out localhost_mycompany_com.crt -days 1095
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:MyTown
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:localhost.mycompany.com
Email Address []:

3. Copy the *.csr, *.key, and *.crt files to a directory for Apache, creating it if it doesn’t exist:

sudo mkdir /etc/apache2/ssl
sudo cp *.csr *.key *.crt /etc/apache2/ssl
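As an optional sanity check, a certificate and the key it was generated from share the same RSA modulus, which you can compare. This sketch uses throwaway files generated on the spot; substitute your real .key/.crt paths:

```shell
# Generate a throwaway key + self-signed cert non-interactively, just for the demo
openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN=localhost.mycompany.com" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null
# These two digests match when the cert belongs to the key
openssl x509 -noout -modulus -in /tmp/demo.crt | openssl md5
openssl rsa  -noout -modulus -in /tmp/demo.key | openssl md5
```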

4. I had some trouble getting port 443 working. I could see it wasn’t listening when I ran the command

netstat -nap tcp | grep 443

When everything is working correctly, you should see something like this line in your netstat output:

tcp46      0      0  *.443      *.*        LISTEN

I tried a number of things, including verifying that I had no firewall turned on (System Preferences > Security & Privacy > Firewall). In the end, all I needed to do was uncomment the following Include line in /etc/apache2/httpd.conf:

Include /private/etc/apache2/extra/httpd-ssl.conf

Also verify that mod_ssl is being loaded by this line in httpd.conf:

LoadModule ssl_module libexec/apache2/mod_ssl.so

5. Verify the following critical items in /etc/apache2/extra/httpd-ssl.conf or in your configuration flow.

# this is the line needed to get your server listening on port 443, and see it in netstat

Listen 443

# comment out the default lines which point to non-existent files, and insert your own real paths:

#SSLCertificateFile "/private/etc/apache2/server-dsa.crt"
#SSLCertificateKeyFile "/private/etc/apache2/server-dsa.key"
SSLCertificateFile "/private/etc/apache2/ssl/localhost_mycompany_com.crt"
SSLCertificateKeyFile "/private/etc/apache2/ssl/mysslprivatekey.key"

Note – I tried skipping these 2 lines *in this file* since I use them later, but this caused https to fail, with this error in error.log:

Server should be SSL-aware but has no certificate configured [Hint: SSLCertificateFile] (/private/etc/apache2/extra/httpd-ssl.conf:74)

6. Now in your main configuration area (in my case /etc/apache2/other/mysites.conf), add the following:

#! without this line, https requests were going to the wrong DocumentRoot (from the default VirtualHost). You could also put this line in httpd-ssl.conf presumably

NameVirtualHost *:443

#This is the main configuration block to add 

<VirtualHost *:443>
    SSLEngine on
    ServerName localhost.mycompany.com
    # next 2 lines duplicate httpd-ssl.conf, but https fails with connection refused without them
    SSLCertificateFile "/private/etc/apache2/ssl/localhost_mycompany_com.crt"
    SSLCertificateKeyFile "/private/etc/apache2/ssl/mysslprivatekey.key"
</VirtualHost>

7. Restart Apache

sudo /usr/sbin/apachectl restart

8. Open your browser and test for your particular domain

The first time you hit https, you’ll get a warning that the certificate is untrusted; since you know it’s safe, allow the browser to proceed. When everything is working, the http:// and https:// versions of your domain should both work and bring up the same default page.

9. For general troubleshooting, check the Apache error log. I see these warnings in my error.log, but they can be safely ignored in development:

[warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
[warn] RSA server certificate CommonName (CN) `localhost.mycompany.com' does NOT match server name!?

Setting up SSL on Apache (with Ubuntu12 + AWS)

Today I added SSL to my Apache webserver, running on Ubuntu 12, on an AWS instance. This was the first time I’d ever worked with SSL or certificates and it was fairly straightforward though it seemed daunting at first. Ran into a few problems that the Internet didn’t solve for me, so I thought I’d share.

My sequence end to end:

1. When I bought my domain name through Namecheap, it came with an SSL certificate, which I had never activated. Namecheap apparently subcontracts SSL services to a company called Comodo.  You’ll presumably need to purchase a trusted certificate from an authority like Comodo or DigiCert if you don’t already have one for your production site.

2. In preparation for using SSL, I added HTTPS (port 443) to my EC2 Security Group to allow traffic through the firewall. This is in the AWS Management Console ( EC2 -> Security Groups in left nav -> select the group in use by your server instance, click Inbound tab, HTTPS is listed in the dropdown of pre-configured rules you can add).  Here’s what it looks like after you add it:

[Image: the Security Group’s Inbound rules showing HTTPS (port 443) added]

3. I followed the instructions here to generate a private key (myprivatekey.key) and csr file (myserver.csr), saving them to a special directory for safekeeping. Basically this consisted of running:

openssl req -nodes -newkey rsa:2048 -keyout myprivatekey.key -out myserver.csr

Note that I’m using Apache 2 with mod_ssl. Instructions for other OS/webserver configurations are here.

4. I submitted my generated csr to Namecheap through their web form, clicked on an approval email they sent, then received my certificate files by email from Comodo. They sent me a zip containing 2 files:

  • www_myserver_com.crt 
  • www_myserver_com.ca-bundle

5. Next I roughly followed the instructions here to set up the SSL files in production. For other OS/webserver configs, you can look here. For my setup, this consisted of:

  • a. uploading myprivatekey.key file and the zip with the certificate files to AWS, then unzipping the certificate files
  • b. copying the private key under /etc/ssl/private
  • c. copying the 2 unzipped certificate files under /etc/ssl/certs

6. Rather than mucking with Apache’s default config files, I typically load my own Apache .conf file that lives in /etc/apache2/conf.d/mydomain.conf. To enable SSL, I edited mydomain.conf, adding the block below.

<VirtualHost *:443>
    SSLEngine on
    ServerName myserver.com
    SSLCertificateKeyFile /etc/ssl/private/myprivatekey.key
    SSLCertificateFile /etc/ssl/certs/www_myserver_com.crt
    SSLCertificateChainFile /etc/ssl/certs/www_myserver_com.ca-bundle
</VirtualHost>

I already had entries for port 80, so I just had to add port 443. The DocumentRoot and log file locations were inherited from elsewhere in my config, which was fine for my purposes.

7. Enabled SSL for Apache by symlinking the available module files into the enabled-modules directory (on Ubuntu, `sudo a2enmod ssl` does the same thing), then restarted Apache:

$ pushd /etc/apache2/mods-enabled/
$ sudo ln -s ../mods-available/ssl.conf ssl.conf
$ sudo ln -s ../mods-available/ssl.load ssl.load
$ sudo /usr/sbin/apachectl restart

8. Restarted Apache and tailed my access.log and error.log to check for problems:

tail -n30 /var/log/apache2/access.log
tail -n30 /var/log/apache2/error.log

Note that I had originally included the following lines in my .conf file (without the # signs to comment them out), but they caused problems.

#NameVirtualHost *:80
#NameVirtualHost *:443
#Listen *:80
#Listen *:443

I removed these lines because they broke Apache restart and yielded these errors:

[Wed Jun 26 23:04:57 2013] [warn] NameVirtualHost *:443 has no VirtualHosts
[Wed Jun 26 23:04:57 2013] [warn] NameVirtualHost *:80 has no VirtualHosts
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down

Basically, because I left Apache’s default configuration in place and was using a supplementary conf file, my lines duplicated lines in Apache’s /etc/apache2/ports.conf file. You need the Listen lines somewhere in your Apache configuration for things to work, but if they load twice, Apache won’t start.
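For reference, the stock /etc/apache2/ports.conf on Ubuntu 12 already contains roughly the following (from memory; check your own file), which is why my duplicate lines broke the restart:

```
NameVirtualHost *:80
Listen 80

<IfModule mod_ssl.c>
    Listen 443
</IfModule>
```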

I still see this additional warning in my error.log at startup, but I safely disregard it since it does not impact functionality. I leave my instance hostname as AWS configured it.

RSA server certificate CommonName (CN) `www.myserver.com' does NOT match server name!?

9. To verify everything was working properly with my SSL certificate, I first ran a check of my website’s certificate configuration here: http://www.digicert.com/help/

10. I then doublechecked that both of these urls worked for my server, and that my apache access.log showed requests with port 80 and port 443 in use respectively. 

http://www.myservernamehere.com  

https://www.myservernamehere.com
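To do the access.log check from the command line, you can grep for each port. The sample entries below are illustrative (and assume a log format such as Ubuntu’s vhost_combined, which records the vhost and port); point grep at /var/log/apache2/access.log for the real thing:

```shell
# Two fake log lines standing in for real traffic on ports 80 and 443
cat > /tmp/sample_access.log <<'EOF'
www.myserver.com:80 127.0.0.1 - - [26/Jun/2013:23:10:01 +0000] "GET / HTTP/1.1" 200 512
www.myserver.com:443 127.0.0.1 - - [26/Jun/2013:23:10:05 +0000] "GET / HTTP/1.1" 200 512
EOF
grep -c ':443 ' /tmp/sample_access.log
# → 1
```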

 

Next up: getting an SSL certificate to work in my dev environment, not nearly so straightforward it turns out. 

This week in Startup Engineering: Decoupling Django DB & Web logic

In theory it sounded very straightforward: a simple refactoring.

My goal: Separate the django database logic from the web/UI/business logic code. Out of the box, django worked like a charm, an all-in-one stack that ran very efficiently for a web/db prototype website on Amazon Web Services.

But in order to support future scalability, I needed to decouple these components, so they could live on the same or different servers transparently, and communicate completely through service APIs, a la the infamous Steve Yegge rant touting Jeff Bezos’s all-services-all-the-time mandate.

Things started simply enough: reviewing the existing views and models, figuring out what kind of generic APIs I would need in a decoupled world. Then it hit me. Separating user data from the web server meant an entirely new level of authentication and security would be needed between the database and the web servers. User-specific data would now make lots of round trips across the network, and would need protection. Unlike my sheltered days coding at Ask.com, I no longer have a team of brilliant network and system administrators dedicated to solving exactly these problems: masking networks, enabling access and authorization, setting up virtual clouds.

Time for another crash course in bootstrapped engineering. 

Note: I’m also enrolled in Secure Recurring Payments 101, Amazon AutoScaling Architecture 206, and of course, the toughest one for an introvert, Business Development 342a. Thankfully I’m coming off recent successes completing my studies in Video Security and Adaptive Bitrate Tuning, as well as Fitness Video Production 101 and 102.

So that’s where I am now: reviewing my options for Django/Apache authentication methods and frameworks, SSL certificates, and the like. Will report back when I get to the midterm (later this week!), or find a study buddy to give me a head start.