Tuesday, September 9, 2014

Why are there ads on web sites?

On August 1st there was an article on ReadWrite.com urging users to install AdBlock in their web browsers.  I think that is horrible advice.

There is an unspoken agreement between content provider and content consumer: the provider may attempt to make money as long as it is not too obvious or painful for the consumer.  This balance between income for the provider and comfort of the consumer is essential to maintain.  Too much discomfort and the consumer will not visit; too much comfort and the provider will not make any money.

Extreme examples of this exist.  My kids are adept at visiting web pages which are nothing but "clickholes" that serve up malware and ads with the occasional funny video or picture.  These sites are evil and should be avoided.  But just because one site spews malware and popup ads does not mean another site with reasonable ads should be maligned.

There is nothing wrong with having ads on a web page.

Television and radio are full of ads; even "ad-free" radio still includes station identification, self-promotion, and other interruptions which might as well be ads.  Content is not free: people create content for a reason and should be rewarded for the time it takes to create and provide it.  Nothing is truly free; it costs someone something.

Ads are therefore inescapable and necessary.

Well, that is not entirely true.  This page, this text, could be behind a paywall where only premium subscribers could read the content (otherwise all the text would be missing its vowels and you would have to pay for each a, e, i, o, u, and sometimes y).  But I feel paywalls and subscriptions are best left to newspapers and porn.  They are poor solutions to the problem and prevent the exchange of ideas and information.  All the consumer has to do is see an ad on a page (the horror!)

As long as ads are going to be on pages, they might as well reflect the interests of the consumer.  This is why you are tracked by Google.  Not to be spied upon, but to be offered things advertisers hope you will like.  This might be creepy, but I think it is far less creepy than the viagra, catheter, wheelchair, and adult diaper commercials on the evening news.  There is a market for such things, but when I watch the news with my kids I have to explain not only geopolitical issues but also what erectile dysfunction is.  Targeted ads mean that you don't have to know about the latest in self-lubricating catheters, unless that is something that interests you.

When you click on ads that interest you, it helps both you and the content provider.  You will then see ads that more closely match your interests, and the content provider will make some money and be encouraged to make more content you like/need/want.  This is thanks to tracking and targeting of ads.

The tracking implemented by Google is a passive fingerprint of you which is shared across multiple web sites.  Your IP address, the username on your computer, and the type and configuration of your browser create this fingerprint.  Microsoft's jealousy sparked the "Scroogled" campaign, but rest assured that Bing tracks you too (that is why they have a rewards program, to encourage you to be tracked).  The tracking will happen no matter what; the technology is so simple that it would be silly not to incorporate it into any web site.

So, ads and targeting are necessary for content providers to make money on the content they provide.  How much money are we talking here?  Shouldn't content providers be less greedy and display fewer ads, or offer other options?  Well, the sad truth is that the viewing of an ad by a content consumer is worth literally a penny.  A click can be a dime, but most people don't click.  A popular post of mine generated over a hundred page views in a month; for that I was given $0.04.  The total views of all of my pages over the last year might have resulted in a few dollars.

It costs a content consumer literally nothing to see an ad, but it means a lot more to a content provider.  Don't block ads; click on ads that truly interest you on sites that truly help you.  It is the same as putting a penny, nickel, or dime in a jar.


Wednesday, July 30, 2014

Railo 4.2.1.0 Install on Windows with Apache (not IIS) Problem

I will have a longer post soon on this (I hope).

If you are installing a new copy of Railo on a Windows server with Apache (not IIS) using the Railo installer which comes with Tomcat, and at the end of the installation your web browser only comes up with a 404 "not found" error, then you might be having the same problem I was.  The Railo service starts, but then stops (as if the service crashes without any errors).

If you look in the railo\tomcat\logs folder you will see commons-daemon.yyyy-mm-dd.log, in which you will see a few errors that look like: "Commons Daemon procrun failed with exit value: 4 (failed to run service)" and "The service process could not connect to the service controller."  These errors don't lead far, but the next place to look is catalina.yyyy-mm-dd.log.

In there you will see errors like: "SEVERE: A child container failed during start".  Above that error is the issue: if you are using Apache (as you should on a Windows server) you don't have a "C:\inetpub\wwwroot" folder.

The installer defaults the server.xml to contain the following entry: "<Context path="" docBase="C:\inetpub\wwwroot\" />".  Simply edit the c:\railo\tomcat\conf\server.xml file with Notepad and change the entry to read docBase="C:\apache\htdocs\" instead.  Save the server.xml file and restart the Railo service, and you should be able to browse to http://127.0.0.1:8888/index.cfm to see the "Welcome to the Railo World" page.

To have your Apache web site in htdocs be able to handle ColdFusion pages, you need to make an entry in the server.xml file for the default web site on Apache.  To do this, add the following before the "</Engine>" tag toward the bottom (note that Tomcat's attribute names are case-sensitive):
<Host name="[your IP address or DNS name for Apache]" appBase="webapps">
    <Context path="" docBase="C:\apache\htdocs\" />
</Host>


Now you should be able to add an index.cfm page to your htdocs folder and have it display properly.

Sunday, April 27, 2014

Issues Connecting to OpenShift on a new system

Well, there were additional issues when connecting from a new system.  Somehow the new system lacked permissions on the .ssh folder in the user profile.  When "rhc setup" ran, it created the .ssh folder owned by root with no permissions for the user.  After a quick "chown" so that the user had the proper permissions, all was well.  For all to consider, the error was as follows:
Cloning into 'foobar'...
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
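
A minimal sketch of the fix, demonstrated on a scratch directory (the paths here are hypothetical; on a real system you would target /home/<user>/.ssh and run the chown as root):

```shell
# Demonstration in a scratch directory -- substitute the real user's home.
SSH_DIR=/tmp/demo_home/.ssh
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/id_rsa"

# chown -R <user>:<user> "$SSH_DIR"   # take ownership back from root (run as root)
chmod 700 "$SSH_DIR"                  # ssh refuses a group/world-accessible .ssh
chmod 600 "$SSH_DIR/id_rsa"           # private keys must be owner-only
```

After the chown and chmod, "git clone" should work without the publickey error.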

Friday, April 25, 2014

First Steps in OpenShift

OpenShift is a Platform as a Service (PaaS) provided by the Linux gurus at Red Hat.  It is like Amazon Web Services (AWS) but not as confusing for a newbie.  For free (truly free, no credit card required and no worries about utilization-based billing) they give you three "small gears", or three servers with 512 MB of RAM and 1 GB of hard drive space each.  After you sign up for a free account, you can host "applications" on your "gears".  The "applications" can have additional components added by installing "cartridges".  So, I have hosted a PHP "application" and added a MySQL "cartridge" to one of my "gears".  (I hate the lingo; sorry, but this type of jargon drives me nuts and I only have it here to provide you with an orientation to the platform.)
After you sign up for that free account the documentation kind of tapers off and people who are super nerds tell you how easy things are without actually explaining anything.  That is why I am writing this, to provide some context and explanation which will hopefully let someone understand this great platform.
Here are two things that had me stuck once I got started:

1. Install the "rhc" tools.  The process is simple and well documented on ( https://www.openshift.com/get-started ), with more details on ( https://www.openshift.com/developers/rhc-client-tools-install ).  
These tools are important as the platform is tightly bound to local files on your computer using Git (rather than copying files with FTP to/from the server).  With the tools installed and a basic PHP application, you can now use Git to make a nice web page with basic PHP.  If you want to add MySQL, things are a bit more complex.

2. Adding MySQL is easy but connecting to it is a little... different.  OpenShift uses environment variables to connect to your database.  They give you the normal credentials just in case, but it is best to use the environment variables so that your PHP code does not contain sensitive information which could get your site hacked.  As I am rusty on PHP, it took me a few hours of head scratching before I got things to work right.  But here is some code to save you a lot of time:
define('DB_HOST', getenv('OPENSHIFT_MYSQL_DB_HOST'));
define('DB_PORT', getenv('OPENSHIFT_MYSQL_DB_PORT'));
define('DB_USER', getenv('OPENSHIFT_MYSQL_DB_USERNAME'));
define('DB_PASS', getenv('OPENSHIFT_MYSQL_DB_PASSWORD'));
define('DB_NAME', getenv('OPENSHIFT_GEAR_NAME'));
$con = mysqli_connect(DB_HOST, DB_USER, DB_PASS, DB_NAME, DB_PORT);
if (mysqli_connect_errno()) {
    die('Database connection failed: ' . mysqli_connect_error());
}
This sets the "$con" variable as your database connection so that you can run select and insert queries; the mysqli_connect_errno() check stops the script with a readable message if the connection fails.

A longer tutorial is in the works, but I hope this can get just one newbie up and going with OpenShift.

Tuesday, March 18, 2014

Offline/Airgapped Adobe ColdFusion Updates

Recently, I tweeted some stuff that got the goat of some prominent ColdFusion gurus.  I am not normally in a mood to attack the things that I love (and I love ColdFusion), but Adobe has been kicking that baby and stuff must be said.  ColdFusion is not secure by default; you should not just install it and hook that server to the Internet.  We web developers/sysadmins/security people have grown fat/dumb/happy with modern servers which are updated automagically and are (relatively) secure after install.  Adobe's track record with Flash and PDF should tell us differently, but we install CF and expect it to behave like Apache/PHP or IIS/ASP.NET.  It is not like them; you have to get into the time machine to 2003, manually lock down the CF server, and then manually apply the patches as soon as your Internet-facing CF server picks them up.  The CF lockdown guide, in all of its 58 pages of wisdom, is available here: ( http://wwwimages.adobe.com/www.adobe.com/content/dam/Adobe/en/products/coldfusion-enterprise/pdf/cf10-lockdown-guide.pdf )

But what about offline/airgapped servers?  ColdFusion 10 (and I assume 11) can only be updated online.  If you are so unlucky as to have an airgapped CF server then you have to spoof the Adobe update servers.  Here is how you can do that:

1. Download the Updates.xml file from: ( http://download.adobe.com/pub/adobe/coldfusion/xml/updates.xml ) and save it to a thumb drive.

2. Open the downloaded Updates.xml file with Notepad (or Sublime Text) and search for the "cfhf_filename" tags.  Each of those is a hotfix you need to download and put onto the thumb drive.  Basically, if you scroll to the top, you will see that the latest hotfix is something like "hotfix_013.jar".  You will then enter the URL for it into the browser: ( http://download.adobe.com/pub/adobe/coldfusion/hotfix_013.jar ) and save the jar to your thumb drive.  If you know you need to implement older hotfixes, repeat this for all of the hotfixes in the Updates.xml file.

3. You will now need to choose a location to host the updates.  Go to the server (don't copy the files yet) and create a folder accessible via a URL.  I created a folder on my server which was only reachable as http://127.0.0.1, but there is no harm in putting the updates in your normal document root folder.  No matter where you place the updates, they must be accessible via a URL (since this is a web server, I will assume you know how to set up a folder on a web server and what the URL for that folder will be).  For me this was http://127.0.0.1/cfupdates

4. Don't copy the files yet. Now that you have the folder and know what the URL for that folder will be, edit the updates.xml file (in the editor of your choice) and edit the "cfhf_downloadlink" tag to be the full URL to each of the files once you copy them to the server.  For me this changed from "http://download.adobe.com/pub/adobe/coldfusion/hotfix_013.jar" to "http://127.0.0.1/cfupdates/hotfix_013.jar".  Save the updated updates.xml to the thumb drive and copy all the contents to the server folder you created.
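
If the machine you use for editing happens to be Linux, the link rewrite in step 4 can be scripted.  This is only a sketch: the <update> wrapper element below is a stand-in for the real structure of Adobe's file (only the cfhf_filename and cfhf_downloadlink tags are from the file itself), and the 127.0.0.1/cfupdates URL is the example from this post.

```shell
# Create a tiny stand-in for updates.xml (the real file has many more tags).
cat > updates.xml <<'EOF'
<update>
  <cfhf_filename>hotfix_013.jar</cfhf_filename>
  <cfhf_downloadlink>http://download.adobe.com/pub/adobe/coldfusion/hotfix_013.jar</cfhf_downloadlink>
</update>
EOF

# Point every download link at the local update folder instead of Adobe's server.
sed -i 's|http://download.adobe.com/pub/adobe/coldfusion|http://127.0.0.1/cfupdates|' updates.xml
```

After the rewrite, every cfhf_downloadlink points at your own server, so the CF administrator will fetch the jars locally.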

5. Open the ColdFusion Administrator console and hit the Server Update tab, and then the Updates item under the main tree.  In the main dialog area, click the settings item in the top tabs.  Toward the bottom of the screen you will see the "Update Site" area, enter your update URL (to include the path to the updated updates.xml file).  Mine is "http://127.0.0.1/cfupdates/updates.xml".  Click Submit Changes.

6. You should then be able to click the "Available Updates" tab on the top, then "Check for Updates" and have the lower window populate with updates to be applied.

The updates appear to be cumulative (I have not fully tested this, and the file sizes are confusing if they are cumulative -- they don't grow with each update).  Once I applied Hot Fix 13, the lower hotfixes disappeared from my dialog.  You also need to install the "Mandatory Update" for ColdFusion 10 ( http://helpx.adobe.com/coldfusion/kb/coldfusion-10-mandatory-update.html ) before attempting this process.  Yes, you will need to do this for each new hotfix.  Maybe there is a way to manually execute the jar on the server so that you don't have to mess with this update URL business.

Your mileage may vary; this is an Adobe product, after all.

It has been a long time since I have posted to this blog.  There are so many things I want to say and share but have not had the time to discuss them properly.  I guess I have been too busy.  While that is a good thing, it limits my ability to contribute in the only way I can: sharing what I have learned the hard way.  Since I rely upon people doing the same, sharing is something that I see as critical to our combined success.  So, please forgive my lack of contributions... and please continue sharing your knowledge and experiences!

Thursday, August 16, 2012

Centos Minimal Install with Railo 4

I have been using Centos Minimal as a basis for a project for a while
now. I like it because it is small and light and very basic. There
is a small attack surface to it as the only thing it does by default
is allow SSH connections to it. If you are a total Centos Minimal
newbie, it will throw you for a loop. The problem is that the network
is not configured by default and the tools to configure it are not
installed by default. It is like buying a car with the keys locked
inside of it.

Well, not really. You can edit the network configuration (
/etc/sysconfig/network-scripts/ifcfg-eth0 ). A further note to Centos
newbies, nano is not installed by default either so you need to use
vi. As much for my own reference as for anyone who might bother
reading this, here are the settings I normally put in the ifcfg file:

DEVICE=eth0
IPADDR=10.10.10.80
NETMASK=255.255.255.0
GATEWAY=10.10.10.1
DNS1=8.8.8.8
ONBOOT="yes"

This is for a static IP configuration. If you need DHCP then the
config file is more like this:

DEVICE=eth0
BOOTPROTO="dhcp"
HWADDR=00:0C:41:22:33:44
ONBOOT="yes"

Once you have edited the file, save it and restart networking. I
usually use the service command like this: "service network restart"

Now networking should hopefully be up, if you are in a VM like
Virtualbox, be sure to set the network interface mode properly -- in
my case I set it to bridged so that I can use real IPs from my
network. You can test by running "yum update" to get the system up to
date. At this point I install my services and tools I need. At least
I get wget, apache httpd, and php with "yum install wget httpd php".

For my project I need Railo (http://www.getrailo.org). Installing
Railo has gotten so much easier with the version 4 beta. To grab
Railo I use: "wget
http://www.getrailo.org/down.cfm?item=/railo/remote/download/4.0.0.013/tomcat/linux/railo-4.0.0.013-BETA2-linux-installer.run"
or you can trust me and use "wget http://bit.ly/P0vi2g". Make the
installer executable with "chmod +x
railo-4.0.0.013-BETA2-linux-installer.run" and then run it
"./railo-4.0.0.013-BETA2-linux-installer.run"

The wizard will ask you questions about your apache installation,
usernames, and passwords for your configuration. The defaults are
more or less sufficient; it is a good idea to run services with their
own service account and not root.

If you were to test the installation at this point, you would be
disappointed to find that it will not work. The reason is the
firewall installed by default blocks everything except SSH. You will
need to add some rules for the firewall to allow connections. Here is
my basic set of commands to open the firewall for httpd and Railo:

iptables -I INPUT 2 -p tcp --dport 80 -j ACCEPT
iptables -I INPUT 2 -p tcp --dport 8888 -j ACCEPT
service iptables save
service iptables restart

The 8888 is the Tomcat management port set during the wizard. If you
made a change to that port then be sure to open the proper port in the
firewall. Some online documentation says to use the iptables -A
command to append to the "INPUT" chain; the problem with that is that
it will add your rules below the "deny all" rule. As we all want the
rules we add to work, I insert them (iptables -I) as the second rule.
This is rather harmless as it simply pushes each subsequent rule down.

Before you mess with the iptables rules it might be wise to look them
over with "iptables -L -v" to be sure there are not important rules at
the top. When I set up firewall rules, if I am specifically blocking
something, I put that rule first and the last rule should be the "deny
all" rule. Say I am blocking a specific troublesome IP address, then
I would add the blocking rule to the first entry. This might be
"iptables -I INPUT 1 -s 211.144.68.163 -j DROP" or if I wanted to
block a troublesome network "iptables -I INPUT 1 -s 202.0.0.0/8 -j
DROP".

Good luck!

Friday, June 29, 2012

Adobe Flash is not dead, but it does not look good.

A while ago I blogged about Adobe's decision to "Open Source" Flex (
http://simple-webdesign.blogspot.com/2011/11/thanks-for-nothing-adobe.html
). The feeling at the time was that Adobe was teetering on the edge
of killing Flash. The adoption of other standards by Apple, Google
and even Microsoft served to shrink Adobe Flash's market. Then we
hear that Adobe is getting rid of the Android Flash plugin (
http://blogs.adobe.com/flashplayer/2012/06/flash-player-and-android-update.html
).

This is a good thing in the long run as Adobe has ruined Flash
entirely. This is impressive considering where Flash started: it was
the bane of dial-up users in the 90s and then a major security concern
of the 00s. Steve Jobs will be remembered for a lot of things, but I
think I will admire how he killed Flash by speaking the truth. It was
a brave thing to do, to say "Adobe has no clothes". They were part of
the complacent IT crowd that assumed technologies like Internet
Explorer, PDF, Desktops and Flash would be a part of our lives
forever. This is no longer the case.

Maybe this will let Adobe shrink and focus on what they do well,
making multimedia creation applications, and not things they never did
well such as drive web technologies. The Macromedia acquisition by
Adobe was a big, big mistake for everyone except the people who cashed
in on the stocks. Since then it has been a phenomenal loss. With the
death of Flash what is Adobe left with? They killed Freehand (it was
inferior to Illustrator anyhow). Adobe is left with Fireworks,
Dreamweaver and ColdFusion. What a mess. Adobe is primarily a
multimedia content creation company and they hold the *best*
closed-source web application server technology (meaning: better than
ASP as that is all that is left). I never liked Fireworks but on the
other hand I am a designer who can program so maybe I was not intended
to like it. I was OK with Dreamweaver, but it is hardly essential,
and I consider it strange that Flex/Flash Builder was based on the
MUCH better Eclipse and not Dreamweaver. If they were going to
charge money for something, why not leverage their own products?
But those are the dumb moves that got us to this point.

Adobe pushed Flash to be everything to everyone without considering
"should" they do something vs "can" they do something. They have the
same issue with PDF, so it comes as no surprise that I think PDF's
days are numbered as well. All of this is too bad. Flash is still a
great animation platform, just for television now rather than the web.
For example the new Titmouse cartoon MotorCity (
http://peopleofmotorcity.tumblr.com/ ) is entirely Flash... or mostly
Flash as I assume a lot of the car/racing effects are 3d. Flash is an
important part of their workflow, a complete dissolution of Flash
would alter how they work. I like Flash for animation and drawing, it
is what I doodle in when I have the chance. The excellent Webcomics
of Humor Scientist Kris Straub ( http://krisstraub.com/ ) are done in
Flash.

I'd hope this only spells the doom of Flash the plug-in and not Flash
the vector animation and drawing package. I'd really hope for an open
source alternative to the drawing and animation functions of Flash but
that is because I'd like to use a new version of Flash on Linux
without Wine. If you know of such an open source application that is
as smooth as Flash for drawing and animating I'd love to hear about
it.

Wednesday, June 27, 2012

It has been 2 years, time for an XBox Live problem

So 2 years ago this happened (
http://simple-webdesign.blogspot.com/2010/11/xbox-live-anguish.html ).
We had a big problem with XBox Live where Microsoft tried to charge
the automatic subscription fee to an expired card; when we bought a
gift card for our son to renew his subscription, he couldn't use it
because the account was locked over the failed billing. Two years
ago, I sat on a phone for close to 8 hours for the opportunity to give
Microsoft money. Now, two years later, the card the subscription
bills to has expired again. Have they made things better?

No. Things are not better. The card was expiring, and before the
subscription fee came due we decided to update the card information
BEFORE it expired. Turns out you can't. The new card has the same
numbers as the old card with the exception of the expiration date and
the CVV2 code. For 10 hours my son and then my wife tried to update the
card on the console. It would look like it took, but when you go back
into the billing area the number reverted to the expired card
information. I got home and did a few Google searches (I won't use
Bing!). Every XBox forum post link resulted in an ASPX error (nice
advertisement for the raw power of ASP!). Via Google cache, I was
able to get the real support page for XBox Live (
http://support.xbox.com/en-US/contact-us ) -- I was not able to find
that page otherwise, only a hell of pages that link to each other with
"trouble-shooting" information on them.

I tweeted to the XBox Live support account and got a few
back-and-forth tweets of moderate usefulness. The real help came in
the form of an agent on their chat system. The first time I went on
chat the page informed me that I would only have to wait about 3
minutes. I waited 45 minutes to get to number 2 in the queue only to
stay there another 35 minutes. I left and got back into chat and
spent another hour waiting for an agent. After going from 26th to
1st, I met Nichole (my second most favorite Microsoft employee,
EVER!). She had me do some things and in the end we had to cancel the
current subscription, she re-issued me "gift subscription" codes to
get back my remaining balance of time and then we had to resubscribe
to XBox Live with the gift accounts and then enter the new card.

Once again, I found myself putting in a solid day of work to give
Microsoft money. Am I out of line to think that, as a customer, they
should put in a day's work to GET my money? What success would a
business be if to pay your bill with them you had to wrestle
alligators and jump through flaming hoops just to hand them a check?

I think about our other console, the Wii. Internet access is free on
the Wii. It is integrated with Wi-Fi which is the way MOST people
access the Internet from devices in their homes. If you want to buy
stuff to play on the Wii from their store, you can buy points from the
console or Grandma can mail you a gift card. No subscriptions, no
adapters, no multi-layer accounts (Windows Live account -> XBox Live
Profile??) Nintendo does the work and you give them money. They made
it easy. Microsoft intentionally makes it harder than anyone else.
In the age of the App Store and Google Play Store, or even the Ubuntu
Software Center, users can buy and install programs very easily. Is
the usual XBox Live fiasco the way things will work with Windows 8? I
hope not!

Not that it matters, I'd love to throw the damn XBox into the sea
after this last set of issues. Check back with me in 2014, maybe they
will have fixed it by then? LOL!

Friday, June 22, 2012

Scan a network for Public and Private SNMP with Linux

This applies, in my case, to a Virtualbox VM running Backtrack 5r2. I
have a network I inherited. It has been a source of pain that few
could describe. Recent events had me curious: "How do I find out if I
have dumb SNMP configurations on my network?" Often devices come with
bad SNMP and other times people do dumb things, and sometimes there is
a calamitous combination of the two.

The tool of my choice to scan for public and private community strings
was Snmpwalk on Backtrack. I am sure there are other tools and it
might not be the perfect choice, thus my initial frustration at the
lack of documentation and my desire to create this post.

Snmpwalk is available for many linux distributions and offers a huge
array of capabilities. For a quick sample of snmpwalk commands you
can check Kioptrix (http://www.kioptrix.com/blog/?p=29). I went with
a very basic command as I was hoping to not get any results at all.
The command I went with was:

snmpwalk -c public -v1 targetIP

This worked great for a single IP address but I had a whole class C
network to scan. So it was time to use some bash to make this work.
I must confess I love Linux but have the most experience with Windows.
If you are like me then you might be interested in a way to scan a
whole network.

for i in {1..254}; do snmpwalk -c public -v1 192.168.10.$i >> snmp_scan_$i; done

This will scan all the IPs from 192.168.10.1 to 192.168.10.254 for
devices with SNMP configured with a community string of "public". You
can change this to scan for "private" or scan other IP ranges. I am
sure there is a better way to filter out the "No Response from .."
messages. But this worked for me and I wanted to give back to the
Internet.
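
One way to thin out the noise afterward is to delete any result file
that only recorded a timeout. This is just a sketch; the two sample
files below stand in for real snmpwalk output:

```shell
# Two sample result files stand in for real scan output (hypothetical data).
echo "Timeout: No Response from 192.168.10.5" > snmp_scan_5
echo "iso.3.6.1.2.1.1.1.0 = STRING: router" > snmp_scan_8

# Delete every result file whose host never answered.
for f in snmp_scan_*; do
  if grep -q "No Response" "$f"; then
    rm -f "$f"
  fi
done
```

Whatever files survive the loop are the hosts that actually answered
to the "public" community string.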

If you found this at all helpful then please leave a comment!

Friday, April 13, 2012

Roll your own Software Installation GPOs

So, these days almost everyone has Active Directory (AD) implemented.
I even know a few people who run it at their homes (nothing I would
do). If you have AD and you are not using Group Policy Objects (GPOs)
to manage things then you are wasting your time. GPOs are the main
reason to put up with a Microsoft AD network. Since I am a designer,
who seems to have little time for design these days, I am no AD/GPO
guru but can cause some damage. I would like to pass along some of my
dangerous knowledge to you!

GPOs can do all kinds of things, but they are best at messing with the
registries of the computers on the domain. GPOs can also install
applications automatically on the domain computers. If you think that
process would be easy, you are sadly mistaken. The only way a GPO can
push out an application automatically is if the application is
packaged as an MSI. Some cool programs like 7-Zip and
Libre/OpenOffice have MSI versions you can download and push out.
Other, cooler programs do not.

Here is how to make your own MSI files for pushing out applications
with software installation GPOs:

Requirements: 7-Zip, the 7-Zip 7z SFX Library
(http://www.7-zip.org/download.html), exe2msi
(http://www.qwertylab.com/), and Microsoft ORCA
(http://www.technipages.com/download-orca-msi-editor)

Step One: Understand the installation process of your program. You
want a silent install of the application. If there is an installation
wizard, you need to know how to script the install of the program. If
you can't do that then you are (mostly) SOL. In my example I want to
push out a program and schedule a task to run that program
periodically. To install the program I just need to copy a file to a
directory and then run SCHTASKS to schedule the task. I will use a
regular BAT file to script this process.

Step Two: Assemble the files. I usually make a directory that will
contain all the components I need to perform the installation. This
would be like setup.exe and any associated files. If you need to run
the installation program with command switches to make it run in a
scripted and silent way then you might want to call it from a BAT
file. The important thing to know at this point is what command is
needed to kick off the installation as it will need to be configured
in the self-extracting exe in the next step. Once all the files are
in the same directory, select them and right-click to add them to a 7z
file (scheduleProgram.7z in my case). The files for my program
consist of an exe file and the bat file used to make a directory, copy
the exe to the directory and the SCHTASKS string to schedule the task.
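
As a concrete sketch, the install.bat for this example might look like
the following. Every name here (the folder, the exe, the task name,
and the schedule) is hypothetical; adjust it for your own program:

REM install.bat -- hypothetical example; adjust names and paths as needed.
mkdir "C:\Program Files\ScheduleProgram"
copy scheduleProgram.exe "C:\Program Files\ScheduleProgram\"
SCHTASKS /Create /F /TN "ScheduleProgram Nightly" /TR "\"C:\Program Files\ScheduleProgram\scheduleProgram.exe\"" /SC DAILY /ST 02:00 /RU SYSTEM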

Step Three: Make the self-extracting exe file. Copy the new 7z file
containing the installation files and any scripts needed to perform
the silent install to a new folder containing the 7z SFX library
(7zS.sfx in my case). You will need to make a new text file called
config.txt and insert the following into it:

;!@Install@!UTF-8!
Title="Name Of Your Program"
RunProgram="install.bat"
;!@InstallEnd@!

You will need to edit the name of your program and change
"install.bat" to whatever is needed to install your program. In my
case it is install.bat as that creates a directory, copies the files I
need and then runs the SCHTASKS command. To automate the process a
bit, I usually make a bat file in this directory to run the command to
smush all these files together to make the self-extracting exe. You
can run the same command from a command prompt in the proper directory
or you can just make a "make.bat" in the folder with the following as
the contents:

copy /b 7zs.sfx + config.txt + scheduleProgram.7z scheduleProgram.exe

The "/b" flag tells copy to do a binary copy, and the "+"s combine
the listed files, in order, into "scheduleProgram.exe".

This is a good time to test the new SFX executable, check the order of
the files in the copy command and the contents of the config.txt if
you have problems.

Step Four: After your SFX exe is tested and working it is time to turn
that into an MSI. There are many options out there, a free option is
the WIX (http://wix.sourceforge.net/) package. I am not familiar with
it and have had success with the free version of exe2msi. I am not
sure about the license or how the use of the free version is limited.
As the free version has problems from time to time, I assume the "pro"
version would have fewer issues to warrant the $299. Since the free
version is a decent product, if you have money in your budget and want
to support decent software consider buying the pro version.

Exe2msi is simple: after installing it, run the exe2msi.exe program
and browse to the SFX exe you created. Leave the arguments field
blank and hit Build MSI. Once it is done, close the exe2msi
application. The MSI is now ready to be tested for installation. If
you can install the program as expected from the MSI, you are ready
for the next step. If there are problems, double-check that the SFX
exe works properly and re-build the MSI.

Step Five: Test the MSI with ORCA. This will save a lot of time if
there are problems with the way the MSI was generated. Later, if you
notice the MSI fails to install via GPO but the Windows\temp folder on
the targeted computers is getting files like "MSI----.LOG" containing
lines like "1: 2905 2: C:\WINDOWS\system32\appmgmt\MACHINE...", then
you need to run ORCA.

ORCA is an abandoned product from Microsoft for editing MSI files and
the databases they contain. Install ORCA, then right-click any MSI
file and hit "Edit with ORCA". Once ORCA opens the MSI file, hit
Tools, Validate. In the validation evaluation file box, leave it
reading "Full MSI Validation Suite" and hit "Go".
When I was having problems I had the errors: "The
InstallExecuteSequence table does not contain the set of actions
(PublishFeatures, PublishProduct)" as well as "The PublishFeatures
action is required in the AdvtExecuteSequence table" and "The
PublishProduct action is required in the AdvtExecuteSequence table".
To fix those errors I added a row to the InstallExecuteSequence and
AdvtExecuteSequence tables for PublishFeatures (with a sequence of
"6300") and PublishProduct (with a sequence of "6400"). After any
required edits are done, save the MSI file.
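
Laid out, the rows to add look like this (I left the Condition column
blank; adjust if your MSI needs conditions):

Table                    Action            Condition  Sequence
InstallExecuteSequence   PublishFeatures   (blank)    6300
InstallExecuteSequence   PublishProduct    (blank)    6400
AdvtExecuteSequence      PublishFeatures   (blank)    6300
AdvtExecuteSequence      PublishProduct    (blank)    6400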

Step Six: Make a share on the server to hold your installation files.
Share out the folder, giving "AUTHENTICATED USERS" and "DOMAIN
COMPUTERS" at least read and execute permissions on the share and on
the files themselves. The software installation GPO runs as the
computer account, not the user account, so those two groups having
read and execute permissions should allow the computer account to run
the installation MSI.
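
If you prefer the command line over the GUI, something like the
following should set that up. The folder and share name here are
hypothetical; treat this as a sketch rather than exactly what I ran:

rem Folder and share name are examples
mkdir C:\DeployShare
net share DeployShare=C:\DeployShare /GRANT:"Authenticated Users",READ /GRANT:"Domain Computers",READ
icacls C:\DeployShare /grant "Authenticated Users":(OI)(CI)RX
icacls C:\DeployShare /grant "Domain Computers":(OI)(CI)RX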

Step Seven: Open Group Policy Management and make a new GPO and link
it to the OU for the computers you wish to target. It is best to make
a test OU and move a machine you wish to test the GPO with into that
OU. In the Group Policy Object Editor, under the "Computer
Configuration", "software installation" area, hit New, package.
Browse through the network to the share you created in step six and
select the MSI you wish to install. Do not browse via "my computer"
or any path that uses drive letters, as the computer account
installing the MSI will not have access to those resources; only paths
that look like "\\computer\share\install.msi" will work. Make sure
the package has the MSI located in the proper path ("source" should
be that UNC path and not contain any drive letters), that the package
is "assigned", and that the GPO link is enabled on the OU where the
testing computer is placed in Active Directory.

Step Eight: Wait and reboot the computer, or run GPUPDATE /FORCE on
it to force the GPO processing to start. Either way the computer will
reboot, check the GPO, and install the MSI as you configured.
If there is a problem and you need to re-test, you can later right
click on the package and hit "all tasks", "redeploy application" to
force it to be sent back out to the computers in the linked OUs.

If you have problems in step seven where you can't edit the new GPO
because of some "path not found" error, right-click the new GPO and
hit "Back Up..." and back up the new GPO to some location on the
server. Then right-click the new GPO, hit "Restore from Backup...",
and restore the same GPO back. For some reason this was necessary in
my situation.

Software installation by GPO is a typical Microsoft solution where the
promised benefit is almost outweighed by the effort to implement what
should be a simple process. The lack of tools that should come with
Microsoft Windows Server makes the situation almost impossible for the
casual network admin. Hopefully this can guide you through the
process of actually implementing GPO software installations.

Sunday, December 25, 2011

Windows Vista asking "press the configuration button on the access point"?

WOW, reason 9,746 to move to Linux. My parents have a laptop with
Vista on it. Not a big deal, should do normal stuff and not require
much attention. I set them up with anti-virus and the UAC prevents
them from doing anything too silly without seeing the "oh noes!"
pop-up. I figured they were all set.

Well, come to find out, a well-intentioned person "helped" them with
their home network and computers (they have some apples too -- I wish
it was the fruit). The wireless network was WEP with the default SSID
and they were having problems adding a new laptop to the wireless. My
father didn't know what the passphrase was to the router and asked me
for some help resetting it. I easily hit the admin web page for the
router and guessed the password. I set the network to be WPA2-PSK and
gave it a nice passphrase they could remember, and proceeded to write
it down on the router for them. Tested the connection with my wife's
smart phone and the apple laptop.

Before I left my mother asked me to set the wireless network for her
laptop... her Vista laptop. (sense of dread yet? I should have felt
it). The kids were feeling tired and the wife wanted to get home to
get ready for Christmas morning. How hard could this be? The goofy
laptop detected the wireless network, I clicked to join it and the
crazy thing said "Press the configuration button on your wireless
access point."

It might as well have asked me to pet my giraffe. WTF? At the bottom
there was a link to manually enter the PSK in case I could not find
the button... thanks, I am an idiot and can't find a button? I entered
the PSK because pressing a button on a wireless router was dumb, silly
and for stupid people who waste their time. The problem is that
Vista then couldn't connect to the wireless. Maybe I typed the PSK in
wrong... multiple tries and it would not work! I had
looked at the router, it was a Netgear. (sense of dread yet? I should
have felt it).

Come to find out there is a crappy thing on the Netgear called WPS,
or Wi-Fi Protected Setup. This "security" feature lets you press a
button and have the Vista device negotiate a WPA passphrase with the
router. Dumb! The problem is that WPA is crap and WPA2 is where the
cool kids hang out these days, so the button dance does not help.
Vista is WPS-aware (although Microsoft, always blazing new trails of
stupidity and dumbness, calls it "Windows Connect Now"), which is why
it asked for the button in the first place.

Now I am looking at a Vista Home Basic laptop that needs Windows
Connect Now disabled. Google tells me that I can disable it with a
local Group Policy (yay!), but the Local Group Policy Editor does not
work on Vista Home Basic as GPOs are disabled on it (boo!). I found
an xlsx spreadsheet with a mapping between GPOs and their registry
keys (yay!) but am not sure how to decipher the registry entries in
the spreadsheet (boo!).

They look like this:
HKLM/Software/Policies/Microsoft/WCN/Registrars!EnableRegistrars,
HKLM/Software/Policies/Microsoft/WCN/Registrars!DisableUPnPRegistrar,
HKLM/Software/Policies/Microsoft/WCN/Registrars!DisableInBand802DOT1Registrar,
HKLM/Software/Policies/Microsoft/WCN/Registrars!DisableFlashConfigRegistrar,
HKLM/Software/Policies/Microsoft/WCN/Registrars!DisableWPDRegistrar,
HKLM/Software/Policies/Microsoft/WCN/Registrars!MaxWCNDeviceNumber,
HKLM/Software/Policies/Microsoft/WCN/Registrars!HigherPrecedenceRegistrar

Who names this stuff? My main issue is: what does the exclamation
point mean? Are those trailing values binary values, empty strings?
Who knows. I also disabled the Windows Connect Now service and hope
that does the trick. We will see...
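
My best guess is that the exclamation point separates the registry
key from the value name (that is how Microsoft's Group Policy
Settings Reference spreadsheets are laid out), so a .reg file to try
might look like this. I am assuming EnableRegistrars is a REG_DWORD
and that 0 turns the registrars off; I have not confirmed either:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\WCN\Registrars]
"EnableRegistrars"=dword:00000000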

What crap... time for bed.

If you are a lost soul who wondered about this, please comment below.
Any fixes are appreciated!
