Web Applications Archive

Revisiting the POGO-E02 for ownCloud and storage

Posted October 4, 2013 By Landis V

In a recent post, I reconfigured one of my POGO-E02 devices to run Linux (I did the same in a much older post, but I intend to go back and revisit that device with the updated process at some point).  I have recently replaced my laptop (OfficeMax had a pretty impressive flash sale on a Gateway with an AMD quad core, and huge kudos to the OfficeMax in Kearney, NE for taking care of the customer incredibly well when the one I initially picked up had a dead left mouse button and a dead F10 key out of the box… literally zero hassle getting it replaced, great job to that team).  Unfortunately the old laptop is still sitting on the desk next to the new one, soaking up space I don’t have.  I need to make sure everything I’ve done on it in the past year and a half or so is saved off, with duplicate backup copies, as a couple of the videos stored on there are simply irreplaceable.

Time to get back into storage.  I like my Dropbox (you should definitely sign up here if you don’t already have an account) and I recently got some additional temporary space when I bought my Samsung Galaxy S3, but my space is otherwise full and I won’t pay for more.  I think ownCloud, an older version of which I reference briefly in the second linked post above, will prove quite suitable for this, especially in conjunction with my personal CA.

I brought my Linux-ified Pogo back online and connected a Seagate external drive I’ve had sitting around and been meaning to put to use for quite a while.  Once attached to the Pogo unit it was recognized as sdb.  From there, I performed the following (as root) to create two 1TB partitions on the drive:

apt-get install parted
#confirm installation when prompted
parted -a optimal /dev/sdb
rm 2
rm 1
mkpart
#Answer prompts with label=OwnCloud, type=ext4, start=0%, end=50%
mkpart
#Answer prompts with label=Storage, type=ext4, start=50%, end=100%
quit
mkfs.ext4 /dev/sdb1
#I only formatted the OwnCloud partition (sdb1) at this point, in the interest of saving time.
#Perform a similar mkfs.ext4 command for sdb2 when needed.
mkdir /mnt/owncloud
#Should edit /etc/fstab and add a line similar to the following for perpetuity:
#/dev/sdb1 /mnt/owncloud ext4 rw,relatime,data=ordered 0 0
#I did, and then rebooted to make sure everything came up/remounted as expected
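
Rebooting works, but for a quicker sanity check the following should do the same job (a minimal sketch, using the same paths as above):

mount -a
#mount anything listed in /etc/fstab that isn't already mounted
df -h /mnt/owncloud
#confirm the new filesystem shows up with the expected size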

From there, I proceeded with the installation of ownCloud on Debian as per the instructions on the site.  I followed the link for Debian Linux and verified my version (cat /etc/debian_version) – I am on 7.1, so I followed the 7.0 instructions with a few extra verifications along the way.

echo 'deb http://download.opensuse.org/repositories/isv:ownCloud:community/Debian_7.0/ /' >> /etc/apt/sources.list.d/owncloud.list
apt-get update
#Received the following error here:
#W: GPG error: http://download.opensuse.org Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 977C43A8BA684223
#Added the following steps, per http://en.kioskea.net/faq/809-debian-apt-get-no-pubkey-gpg-error
gpg --keyserver pgpkeys.mit.edu --recv-key 977C43A8BA684223
gpg -a --export 977C43A8BA684223 | apt-key add -
#Reran update to verify
apt-get update
#Success! I then noticed they had an equivalent answer in the next step :/ Oh well.
apt-get install owncloud
#Accept dependencies

Once installation completed, I opened a browser to the ownCloud URL on the Pogo at http://<pogo_ip>/owncloud.  Because I intend to make my access seamless whether I’m at home or connecting from the Internet at large, I have a few tricks to do yet, but first I finished up the basic configuration.  I created my administrator account and password, hit the Advanced button, and set my data directory to /mnt/owncloud/data under the mountpoint I had created previously.  The database was set to SQLite, so I left it for the time being.  After submitting, I thought to check and found that there was an existing “data” directory in /var/www/owncloud with ownership www-data:www-data and ug+rwx, so I created a /mnt/owncloud/data directory with the same ownership and permissions.  After refreshing the initial page and submitting the final config again, things loaded properly this time around.
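
For reference, matching that ownership and those permissions on the new data directory amounts to something like this (a quick sketch; the path is the mountpoint from earlier):

mkdir -p /mnt/owncloud/data
chown www-data:www-data /mnt/owncloud/data
chmod ug+rwx /mnt/owncloud/data
#mirror the ownership/permissions of the packaged /var/www/owncloud/data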

I set up a WebDAV connection from the Dolphin file browser and started transferring some of the files from the Linux laptop per the user manual.  I experienced problems transferring a few of these files and remembered something about large file support, so I went back and took a look at that.  The default max upload size looks to be 800MB and I didn’t have anything beyond that, so I tried a reboot; it took a long time to complete, but things seemed to work afterward.  I backed up the remaining files I had planned on, shut down the old laptop, and was able to get it out of the way.
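
If I do run into the limit later, my understanding is that it’s really the PHP limits that matter.  Something like the following should raise them (a sketch only; I’m assuming /etc/php5/apache2/php.ini is the right file on Debian 7, and ownCloud’s own .htaccess may set php_value limits that would also need a matching bump):

#raise the PHP upload limits, then restart Apache to pick them up
sed -i 's/^upload_max_filesize.*/upload_max_filesize = 4G/' /etc/php5/apache2/php.ini
sed -i 's/^post_max_size.*/post_max_size = 4G/' /etc/php5/apache2/php.ini
service apache2 restart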

I downloaded the Windows client on the new laptop and installed it.  When installation was complete, I took a moment to pause and reconfigure a few things.  As I plan to partially expose the ownCloud instance to the Internet and have it accessible from my smartphone, one of my first goals/low hanging fruit items for protecting it from a large chunk of port scans, hacks, and exploits is to change the web service to run on a non-default port number.  I could simply do this externally, but then I would have to change my client settings whenever I was on my wifi connection.  So instead I referenced changing the port number in Apache.  I’ve done this a few times in the past, but I decided I would like to also maintain the ability to serve some pages on the standard HTTP port if I so decided later.  I edited /etc/apache2/ports.conf and added an entry for my non-SSL port directly below the “Listen 80” directive, and for my SSL ports within the <IfModule mod_ssl.c> and <IfModule mod_gnutls.c> sections below the “Listen 443” directives in each of those sections.  Rather than edit the “default” site file, I created a new site file at /etc/apache2/sites-available/owncloud, similar to the following.  The port numbers and DNS names referenced are purposefully invalid as I’m not trying to create a honeypot; modify them appropriately for your configuration.

        <VirtualHost *:8081>
                ServerName owncloud.example.com
                DocumentRoot /var/www/owncloud
                ServerAdmin me@example.com

                <Directory /var/www/owncloud>
                        Options Indexes FollowSymLinks MultiViews
                        AllowOverride None
                        Order allow,deny
                        allow from all
                </Directory>

                #ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
                #<Directory "/usr/lib/cgi-bin">
                #        AllowOverride None
                #        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                #        Order allow,deny
                #        Allow from all
                #</Directory>

                ErrorLog ${APACHE_LOG_DIR}/ownclouderror.log

                # Possible values include: debug, info, notice, warn, error, crit,
                # alert, emerg.
                LogLevel warn
        </VirtualHost>
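
For completeness, the ports.conf change boils down to extra Listen lines (the port here is another placeholder), and the new site still has to be enabled before Apache will serve it:

#in /etc/apache2/ports.conf, directly below "Listen 80":
#    Listen 8081
#(and matching Listen lines in the mod_ssl / mod_gnutls sections for SSL)
a2ensite owncloud
service apache2 restart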

In the future I will also be creating an owncloud-ssl file in the same directory; that will definitely happen before I configure any phone synchronization over the public Internet.  After restarting the Apache service, I was able to access the ownCloud instance via http://<owncloud_ip>:<new_port>.  I still have a bit of DNS work to do on my router to make the naming work universally, from the Internet or my LAN.  I may document that if there is any interest.  I would have to say this install was pretty slick, and I’m likely to pay for the Android client.


Adding search functionality

Posted September 19, 2013 By landisv1

I finally got around to adding search functionality to this blog, for whatever it’s worth.  I think my categorization makes it fairly simple, but there are probably a few benefits from having it on the site – so hopefully you find it helpful.  I ended up using the method documented here and changing my permalink settings… which I hope I don’t end up regretting.  Definitely still have some clean-up to do, but it’s a start and got me the information I needed at least.


Your own Dynamic DNS in 3 steps | The Nexus

Posted September 16, 2013 By Landis V

http://nexus.zteo.com/blog/your-own-dynamic-dns-in-3-steps/

Interesting idea, and it perhaps provides a little added flexibility (and reduced cost) compared to Dyn, though I don’t have any complaints about Dyn at this point.  Also review the DDNS and TSIG articles on Wikipedia as well as RFC 2136.  I’m still thinking a hybrid HTTPS method might be worthwhile: any web server one had available could potentially serve as a receiver for the updates, as long as it had at least enough outbound access to relay the information to the BIND server, allowing for a bit more obscurity in the update path.  It also wouldn’t require the host receiving the update (perhaps a web host) to run BIND itself, as long as it could establish an outbound connection to the BIND server or write to some file the BIND server could retrieve.
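
For reference, the plain RFC 2136 path (no HTTPS middleman) looks roughly like this; the zone, key name, and secret below are made up:

#send a TSIG-signed dynamic update straight to the BIND server
nsupdate -y 'hmac-md5:ddns-key:c2VjcmV0c2VjcmV0c2VjcmV0' <<EOF
server ns1.example.com
zone example.com
update delete home.example.com A
update add home.example.com 300 A 203.0.113.45
send
EOF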

Interesting question therefrom – short of a VPS, is it possible to use client-side certificates to identify a client to a webserver?  How about self-signed client certificates?


Multi-IR remote

Posted September 1, 2013 By Landis V

This, possibly in combination with this (or at least the software component thereof), might be a simpler solution to something I was thinking about today.  I have an XBMC remote control application on my phone and tablets which I like for several reasons; the fact that it’s always handy and never lost – or at least easily found – is near the top of that list.  This got me thinking about running an IP-to-IR remote so I could leave an IR transmitter in some fixed location, always pointed at the remote-controlled electronic devices, and just use my always-handy Android to manage channels on all of them.  As an added bonus, the webmote software might provide an option to integrate all the remotes into one single control interface rather than needing different remotes for TV, DVD player, surround sound, and XBMC.

My original thought was just to figure out a way to integrate a USB port on an existing remote control and basically trigger the sending of the commands, but this would probably be more universal and likely much less thought and work intensive.

I found ATmega 88’s here for around $3 each, but haven’t reviewed the full parts list yet.


Gallery, OpenSSL CA, and TrueCrypt volume

Posted July 31, 2013 By Landis V

I’ve been thinking about, and even making attempts at, getting Gallery set up within my home network to allow automatic uploading from my Android devices.  There are a few things I wanted to do as part of this post, and since I’m getting around to doing it at 11:00 at night, it’s going to be more of a notes-style document than a well-structured post.

First, my goal.  I want to be able to automatically upload images taken on my Android devices to a “private cloud”… also known as a computer (a VirtualBox VM, actually) on my network running the Gallery software.  I want this to be at least as simple as it currently is (I currently use Dropbox with instant upload enabled on both my wife’s phone and mine).  I want our uploads and the account information associated with them to be secure, both on my private network and particularly across the Internet if I decide to go that route.  I have been thinking about setting up a trusted certificate authority for some time, and it’s an opportunity to learn and practice with software I don’t use on a daily basis.  I have an innate distrust of the cloud.  And, finally, because I think I can and I want to confirm that.

Why am I doing this?  Dropbox works fairly well, and it’s nice to have the pictures replicated to both the cloud and our PCs… but we have exhausted our space (if you don’t already have Dropbox and would like to sign up via the link above, that will get me another 500MB 🙂 ).  While there are a couple of workarounds for this such as moving all the current photos out of my upload folder, there are some things I gain from Gallery that I’ve been wanting.  I may make an effort at integrating the Dropbox functionality alongside Gallery at some point down the road, but there are several steps to get there first. My wife also typically turns off automatic upload and replicates images manually.  I think she told me why, but I’ve forgotten; I’d like to get this set back to an automatic function so it’s not something she needs to remember to do.

Here’s a list of the parts involved in my venture, and a short explanation of why:

  • My Asus RT-N16 router with Tomato firmware
    • Provides internal DNS service and external-to-internal firewall access and port mapping
  • Android
    • Both our phones are rooted US Cellular (awesome carrier… comment if you’re considering changing carriers and interested in hearing more about them) Samsung Galaxy series Android devices.  I don’t have a great deal of love for iOS.
  • OpenSSL
    • I’ve wanted to configure a CA (certificate authority) to provide clean SSL service for a few of my internal web services for a while.  This seemed like a good opportunity to do so, especially when I considered potential public access to my Gallery server.
  • TrueCrypt
    • I’m planning to store my CA files in a volume that is both encrypted with TrueCrypt as well as offline except when I need to sign a certificate.
  • ReGalAndroid
    • Android client app to support automatic uploading
  • TurnKey Linux
    • Lightweight guest for Gallery VM.  I had almost forgotten that I had downloaded the pre-built Gallery appliance from them until I started this post.
  • Gallery
    • In addition to what looks to be a good gallery interface to my photos, I can configure automatic, private backup.  I gain the ability to tag and comment on photos, and lose the risk of having photos exist “in the cloud” by default.

This ends up being a moderately complex setup for a “typical” home environment.  Some might even say it’s excessive.  However, when assembled together, each of these pieces helps to reach the aforementioned goal.

I actually started down this path a little while ago and had a few of the foundational constructs in place.  VirtualBox was installed on my XP host system.  I had downloaded and configured a VM using the TurnKey appliance turnkey-gallery-12.1-squeeze-i386-vmdk and performed the basic setup for the TurnKey appliance.  I had installed the ReGalAndroid app on my SGSII and attempted setup with SSL, and discovered that a self-signed certificate on Android just wasn’t good enough.  Attempting to connect to the Gallery over my wifi connection via SSL yielded a “No peer certificate” error.  Not particularly surprising considering the SSL cert on the Gallery server was self signed.

So, at this point I essentially have two problems to address that more or less boil down to a single problem:  I need to create my own personal trusted CA (which has been near the end of my todo list for quite a while), use that CA to sign a certificate for the Gallery server (and preferably other internal web service servers), and trust certificates signed by my new private CA on our Android devices and internal computers.  I’ve made some effort at this previously as well; I’m currently running PCLinuxOS on the laptop I typically use, and I got annoyed and failed to complete creating a CA on that platform (especially in combination with my desire to store the CA in a TrueCrypt volume).

This evening I set out to create the OpenSSL CA on a well-established Ubuntu box on my network, largely based on the instructions at https://help.ubuntu.com/community/OpenSSL.  I didn’t get it completed, largely due to documentation efforts and to searching for a window I thought I had open regarding trusting new CAs on Android.  At this point I need to spend some quality time with iptables on another project, so I’ll try to pick back up from here later.  (This highlights another minor gripe/difficulty/annoyance I’ve had… serial posts in WordPress… there has to be a good way to do them, and some day I’ll probably need to take the time to figure it out.)  Once I finish creating the CA, my next steps will probably involve either creating a TrustManager on Android as documented here or importing a new (private) CA root certificate as documented here.
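
The gist of the CA setup from that page, condensed into the commands I expect to run (the file names here are my own placeholders):

#generate the CA key and a self-signed root certificate good for ten years
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
#then, for each internal server: a key, a signing request, and a CA-signed cert
openssl genrsa -out gallery.key 2048
openssl req -new -key gallery.key -out gallery.csr
openssl x509 -req -days 365 -in gallery.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out gallery.crt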

After creating the necessary certificates largely as indicated in the Ubuntu OpenSSL link, I copied the certificate and private key for the Apache server for my gallery page over to the Gallery appliance.  I then ran a2enmod ssl as root to enable mod_ssl on the Gallery server per instructions at https://help.ubuntu.com/10.04/serverguide/httpd.html, HTTPS Configuration section, and received a report back that mod_ssl was already enabled.  I then moved the certificate and private key to /etc/ssl/certs and /etc/ssl/private, respectively, ran a2ensite default-ssl, and modified the SSLCertificateFile and SSLCertificateKeyFile directives to point to the correct certificate and key.  Though I am still expecting to need to create a concatenated certificate chain with the CA certificate and the server certificate, I went ahead and tested a restart of the server to see what I got.
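
In command form, the above was roughly the following (certificate and key file names are the placeholders from earlier):

a2enmod ssl
#already enabled on the TurnKey appliance, as it turned out
mv gallery.crt /etc/ssl/certs/
mv gallery.key /etc/ssl/private/
a2ensite default-ssl
#point SSLCertificateFile and SSLCertificateKeyFile in
#/etc/apache2/sites-available/default-ssl at the files above
service apache2 restart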

I received a certificate error/”problem with the certificate on this server” page in IE, much as I had expected – the client I was connecting from doesn’t even trust my new CA as a root yet.  I continued past the error, and noted that I will probably have to do a little work on the default-ssl site file in /etc/apache2/sites-available, as it doesn’t load directly to the gallery page; much of that should be able to be copied from the ‘default’ site file in the same directory, and I will play with that later.  I first wanted to see if I could get rid of the SSL error on the default Apache page, so I installed my new root CA public certificate as a trusted root CA on that box and retried loading the page.  SSL loaded cleanly, oorah!

So, to enable Gallery on HTTPS, I checked /etc/apache2/sites-available/default.  No reference there.  Turns out I needed to edit /etc/apache2/sites-enabled/gallery (they didn’t do a symlink as is done for the default sites).  I updated the VirtualHost section for *:443 to include the cert and key directives from the default-ssl config file, ran a2dissite default-ssl to disable the default-ssl site, and restarted Apache again.  I was then able to load the gallery site with HTTPS and no errors from that system.
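
The net effect was something like this (the directives came from default-ssl; file names are again my placeholders):

#in /etc/apache2/sites-enabled/gallery, inside the <VirtualHost *:443> section:
#    SSLEngine on
#    SSLCertificateFile    /etc/ssl/certs/gallery.crt
#    SSLCertificateKeyFile /etc/ssl/private/gallery.key
a2dissite default-ssl
service apache2 restart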

I copied the public certificate for my root CA to my Dropbox, clicked on the .crt file in the Android Dropbox app, and was able to install the certificate after setting an unlock mechanism.  Following that, I tested access from the ReGal Android app, but received a mismatch on the CN.  Apparently the ReGal app will not accept a match on an altName, so I will need the CN to match the FQDN (which I have to use for the connection so it works both when I’m at home and when I’m on the public Internet).  Time to re-issue the certificate with the CN set to the FQDN and the hostname as a subject alt name.
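
Re-issuing with an alt name means supplying an extensions file at signing time.  Roughly (names are placeholders, and the serial just needs to differ from the first cert):

#list the alternate names the certificate should also match
cat > gallery-ext.cnf <<'EOF'
subjectAltName = DNS:gallery.example.com, DNS:gallery
EOF
#new CSR with the public FQDN as the CN, then sign it with the extra extensions
openssl req -new -key gallery.key -subj '/CN=gallery.example.com' -out gallery.csr
openssl x509 -req -days 365 -in gallery.csr -CA ca.crt -CAkey ca.key -set_serial 02 -extfile gallery-ext.cnf -out gallery.crt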

After correcting and replacing the certificate and key, and revoking the old one for whatever that was worth, I encountered a new error with details “net.dahanne.gallery3.client.business.exceptions.G3ItemNotFoundException”.  This was fixed by enabling the REST module as indicated here.  Once that was done, I was able to establish a secure connection to the gallery!


http://commons.codeforamerica.org/

I was looking for something like this recently.  Need to explore it a little further.


I ran across these two articles recently, which reminded me to take a look at my account configuration.
http://mashable.com/2013/04/15/hackers-wordpress-blogs/ and, from there, http://ma.tt/2013/04/passwords-and-brute-force/.  Matt linked to Kelly’s post, which has straightforward, easy-to-follow instructions on how to remove the admin account.

I know about these things, and it’s something I should have done some time ago, but “things come up” 🙂  Having not gone through the procedure before, I did have a few questions, which I answered by experimenting.  First, I wanted to make sure “private” posts migrated properly to the new user – they do.  I was also going to check on drafts, but found it was easier to just clean up the drafts I had hanging around than to spend a lot of time messing with it.

I have a few questions that remain to be answered, but they probably will be after the next WordPress update.  One additional step I would recommend goes just a little bit further in obscuring the name of the administrator account: I created a separate “Author” account, assigned all previous posts to that account, and will make myself use it to the extent possible for content creation.  If nothing links to the admin account, it should be just that much harder to locate, but I welcome comments on that subject from more experienced and regular WordPress users.

Edit 5/1:  I made a small change from a “Contributor” to an “Author” account that will save me having to sign in as an admin in most cases.
