Accordion FAQ with Searching and Highlighting

Wednesday, March 2, 2011 1:02:11 AM

Recently I was working on a website redesign, and one of the items specified in the requirements was a Frequently Asked Questions (FAQ) page. These are pretty run-of-the-mill these days. The design showed a search box for the FAQ in addition to the full site-wide search, and they also wanted collapsible (accordion) sections with collapsible questions within each section. Aside from the search, not a biggie.

Now, normally I would go back to the customer and seek some clarification, but this time I decided I would try something, just to see if I could do it. I didn't want to implement a server-side solution to the searching, since everything was on this page and already loaded. I figured there would be a way to accomplish all of my goals with a little bit of JavaScript and the handy jQuery library. As it turns out it was not as easy as I thought it would be, yet the amount of code that it required was still incredibly minimal.

The FAQ structure

The structure of the FAQ that I was provided was pretty basic, nothing fancy or out of the ordinary to see here.

<div>
	<label for="FAQSearch">Frequently Asked Questions Search</label> 
	<input type="text" name="FAQSearch" id="FAQSearch" />
	<input type="submit" id="SearchFAQ" value="search" />
</div>

<div>
	<a id="ExpandAll" href="javascript:void(0);">Expand All</a> / 
	<a id="CollapseAll" href="javascript:void(0);">Collapse All</a>
</div>

<div id="FAQ">
	<h3 class="Topic">Topic 1</h3>
	<div class="TopicContents" style="display: none;">
		<h4 class="Question">Question #1-1</h4>
		<div class="Answer" style="display: none;">Answer #1-1</div>
		<h4 class="Question">Question #1-2</h4>
		<div class="Answer" style="display: none;">Answer #1-2</div>
		<h4 class="Question">Question #1-3</h4>
		<div class="Answer" style="display: none;">Answer #1-3</div>
		<h4 class="Question">Question #1-4</h4>
		<div class="Answer" style="display: none;">Answer #1-4</div>
		<h4 class="Question">Question #1-5</h4>
		<div class="Answer" style="display: none;">Answer #1-5</div>
		<h4 class="Question">Question #1-6</h4>
		<div class="Answer" style="display: none;">Answer #1-6</div>
	</div>
	<h3 class="Topic">Topic 2</h3>
	<div class="TopicContents" style="display: none;">
		<h4 class="Question">Question #2-1</h4>
		<div class="Answer" style="display: none;">Answer #2-1</div>
		<h4 class="Question">Question #2-2</h4>
		<div class="Answer" style="display: none;">Answer #2-2</div>
		<h4 class="Question">Question #2-3</h4>
		<div class="Answer" style="display: none;">Answer #2-3</div>
		<h4 class="Question">Question #2-4</h4>
		<div class="Answer" style="display: none;">Answer #2-4</div>
		<h4 class="Question">Question #2-5</h4>
		<div class="Answer" style="display: none;">Answer #2-5</div>
	</div>
</div>

Like I said, pretty run-of-the-mill stuff. But what this lacks is the functionality to actually perform the searching and expanding. Let's take a look at the JavaScript to see what I did.

The script that makes it all happen

<script>
	$('h3.Topic').click(function () {
		$(this).next().toggle(300);
	});
	$('h4.Question').click(function () {
		$(this).next().toggle(300);
	});
	$('#ExpandAll').click(function () {
		$('#FAQ').children('div.TopicContents').show(300).children('div.Answer').show(300);
	});
	$('#CollapseAll').click(function () {
		$('#FAQ').children('div.TopicContents').hide(300).children('div.Answer').hide();
	});
	jQuery.expr[':'].Contains = function (a, i, m) {
		return jQuery(a).text().toUpperCase().indexOf(m[3].toUpperCase()) >= 0;
	};
	$('#SearchFAQ').click(function () {
		$('#FAQ').children('div.TopicContents').hide().children('div.Answer').hide();
		if ($('#FAQSearch').val() != '') {
			$('div.Answer:Contains(' + $('#FAQSearch').val().toUpperCase() + ')').show().parent().show(300);
			try {
				$('.highlight').removeClass("highlight");
				$('div.Answer:Contains(' + $('#FAQSearch').val().toUpperCase() + ')').each(function () {
					$(this).html(
						$(this).html().replace(
							new RegExp($('#FAQSearch').val(), "ig"), 
							function(match) {
								return '<span class="highlight">' + match + '</span>';
							}
						)
					)
				});
			}
			catch (err) {
			}
		}
		return false;
	});
</script>
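
(One note before breaking this down: the handlers above assume the FAQ markup already exists when they are bound, so the script either needs to sit near the end of the page or be wrapped in jQuery's ready handler, e.g. $(document).ready(function () { /* bindings */ });.)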

Let's break this up into individual functions.

The Accordion Effect

	$('h3.Topic').click(function () {
		$(this).next().toggle(300);
	});

This should be pretty straightforward. When the user clicks on a Topic, it toggles the visibility of the next element, which should always be a div with class TopicContents.

	$('h4.Question').click(function () {
		$(this).next().toggle(300);
	});

This is almost identical to the code before, except this toggles the element following an h4 with class Question. This should be a div with class Answer.

	$('#ExpandAll').click(function () {
		$('#FAQ').children('div.TopicContents').show(300).children('div.Answer').show(300);
	});

When the user clicks the Expand All link at the top this will set all divs with the class TopicContents to visible and then set all child divs with class Answer to visible.

	$('#CollapseAll').click(function () {
		$('#FAQ').children('div.TopicContents').hide(300).children('div.Answer').hide();
	});

Much like the previous item but in reverse. This will set the visibility of all divs with class of TopicContents or Answer to hidden.

Searching

	jQuery.expr[':'].Contains = function (a, i, m) {
		return jQuery(a).text().toUpperCase().indexOf(m[3].toUpperCase()) >= 0;
	};

This was a major piece of the magic that made the searching work. It adds a custom selector, :Contains, that matches case-insensitively. It does this by converting both the element's text and the text you are searching for to upper case before performing the comparison. I did not write this snippet myself, but I do not recall which site I pulled it from; I believe it was the jQuery forums or Stack Overflow.
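
For example, once it is registered the selector can be used like any other jQuery selector. A quick illustration with a made-up search term:

	// Show every Answer div whose text contains 'shipping', regardless of case
	$('div.Answer:Contains(shipping)').show();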

	$('#SearchFAQ').click(function () {
		$('#FAQ').children('div.TopicContents').hide().children('div.Answer').hide();
		if ($('#FAQSearch').val() != '') {
			try {
				$('.highlight').removeClass("highlight");
				$('div.Answer:Contains(' + $('#FAQSearch').val().toUpperCase() + ')').each(function () {
					$(this).show().parent().show(300);
					$(this).html(
						$(this).html().replace(
							new RegExp($('#FAQSearch').val(), "ig"), 
							function(match) {
								return '<span class="highlight">' + match + '</span>';
							}
						)
					)
				});
			}
			catch (err) {
				alert(err);
			}
		}
		return false;
	});

This is the entire searching function. Let's go through it line by line:

  • When the user clicks the search button
    • Collapse all Topics and Questions
    • If the user actually typed something into the search box
      • Try the next block of code (so we can handle any errors gracefully)
        • Remove all highlighted terms from previous searches
        • Using the previously mentioned selector, look for the term typed in the search box in all divs with class Answer and execute the following code
          • Set the answer and parent topic to visible
          • Change the Answer's HTML contents to:
            • Perform a search and replace on the Answer's HTML
              • Create a new case-insensitive regular expression with the search term
              • Perform the following when it finds the search term
                • Wrap the search term in a span with class highlight
      • Catch any errors
        • Show an alert with the error - for debugging purposes
    • Return false so that it does not cause a page post-back/form submission.

In my CSS file I just set .highlight to 'background-color: yellow;' as a quick demo.
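
If you want to drop it straight into a stylesheet, that rule is simply:

	.highlight
	{
		background-color: yellow;
	}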

Hope you enjoyed this and learned something from it. It was quite an adventure getting all of the pieces to line up properly. The hardest parts to get working, mostly due to my lack of experience with JavaScript/jQuery, were the regular expression setup and the proper order in which to perform the HTML substitution; for a while it would run without errors but not change any of the text. May my battle be your benefit.
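
One refinement worth considering, since the regular expression is built straight from whatever the user types: escape regex metacharacters before constructing it, so a search for something like "C++" or "what?" doesn't throw and fall into the catch block. This is just a hypothetical hardening of the demo, not something the code above does:

	// Escape regex metacharacters in the search term before building the RegExp
	var safeTerm = $('#FAQSearch').val().replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
	var searchExpression = new RegExp(safeTerm, 'ig');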



View the demo


Dynamically Access Subclass Properties in PHP

Monday, December 6, 2010 2:27:04 AM

The other night I was working on some template rendering code and wanted to handle classes whose properties were themselves classes with their own properties.

An example:

class Class1 {
    public $Prop1 = "Property 1";
    public $Prop2;
    public function __construct()
    {
       // Assign to the member property, not a local variable
       $this->Prop2 = new Class2();
    }
}
class Class2 {
    public $SubProp1 = "Sub Property 1";
}

In my template I wanted to include tokens, such as {{Prop1}} and {{Prop2->SubProp1}}, and then pass an instance of Class1, along with the template, to the template processor and have it replace the tokens with the appropriate property values.

This is all fine and dandy for the main properties, but when I started working with the inner class properties I couldn't get it to work and could not find any shortcut methods online. Enter my new function (part of my DAO class).

 
public static function GetPropertyValue($object, $dataitem)
{
   // $object is the property path (e.g. "Prop2->SubProp1"); $dataitem is the instance to read from
   if (strstr($object, "->"))
   {
      // Pull off the first property in the path and drill into it
      $parts = explode("->", $object);
      $sub = $dataitem->{$parts[0]};
      // Rebuild the remainder of the path and recurse with the sub-object
      $subobj = "";
      for ($i = 1; $i < count($parts); $i++)
      {
         if ($i > 1)
         {
            $subobj .= "->";
         }
         $subobj .= $parts[$i];
      }
      return DAO::GetPropertyValue($subobj, $sub);
   } else {
      // No "->" left, so this is a plain property read
      return $dataitem->$object;
   }
}

Now all I have to do is load my template, retrieve a list of tokens, and for each token perform $template = str_replace("{{" . $token . "}}", DAO::GetPropertyValue($token, $classinstance), $template);
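
Put together, the processing loop ends up looking something like the sketch below. The sample template string and the preg_match_all() pattern used to pull out the tokens are just illustrative assumptions, not part of my actual template engine:

$template = "Value 1: {{Prop1}}, Sub value: {{Prop2->SubProp1}}";
$classinstance = new Class1();

// Find every {{...}} token in the template
preg_match_all('/\{\{(.+?)\}\}/', $template, $matches);

// Swap each token for the value resolved by GetPropertyValue()
foreach ($matches[1] as $token) {
    $template = str_replace("{{" . $token . "}}", DAO::GetPropertyValue($token, $classinstance), $template);
}

echo $template; // Value 1: Property 1, Sub value: Sub Property 1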

If you know of a way to dynamically retrieve these sub-properties I would love to know about it; otherwise, feel free to use this code in any of your projects (commercial or open source). If you do use it, drop me a line and let me know.


HTC Droid Incredible 2.2 Bug Killing Exchange Services

Monday, September 20, 2010 10:59:31 AM

It looks like there is a bug in the 2.2 firmware for the HTC Droid Incredible that will fill up your Exchange logs and add overhead to your HTTP logs.

We saw our HTTP logs increase from ~20MB/day to over 100MB/day. On our message store logs we saw an increase from 1-2 log files, 5MB each, every minute to over 20.

Shutting off the phones resulted in an immediate reduction in log file creation.

As an example, one of our users showed a drastic increase in HTTP log entries:

  • 9/07 – 803 entries
  • 9/08 – 2768 entries
  • 9/09 – 408414 entries
  • 9/10 – 428765 entries

Both users are using the same phone and are on Verizon.

One user is also experiencing a calendar issue where multiple calendar items are appearing even after deletion and pages of reminders from the past keep coming up.

There are lots of forum entries around the web on this, but no fixes that I've been able to locate. Hopefully HTC/Verizon will have a fix out soon. Until then we are waiting for the users to bring their phones in so we can perform some diagnostics and testing on them.

If you know of any good fixes or are having this issue, drop a line in the comments. The best solution we have for now is to disable OMA or Mobile Network on the device and wait for a fix or install a 3rd party e-mail application.


My Recent SAN Migration

Tuesday, September 7, 2010 2:55:48 PM

SAN Migration Summary

Environment 1:

  • 1 XIOTech Magnitude 3D 3000 (Old)
  • Cisco 9100 SAN Fabric (Old)
  • 2 XIOTech Emprise 5000 (New)
  • Cisco 9124 SAN Fabric (New)
  • 12 TB of Data
  • 34 SAN Volumes
  • 2 SAN Connected Physical Windows Servers (MPIO)
    • File Servers running VSS and DFS/DFS-R
  • 1 Windows Server Added to New SAN (MPIO)
    • Exchange Server 2003
  • 3 vSphere 4.0.0-267974 Enterprise Nodes
    • 24 Virtual Servers
      • 2 SQL Servers
        • 1 SQL Server has 3 RDM Volumes
      • 2 Web Servers
      • 1 Domain Controller
      • 1 Print Server
      • 18 App/Test Servers
    • 5 SAN Connected VMFS Volumes (MPIO)
    • 3 SAN Connected RDM Volumes (MPIO)

Environment 2:

  • 1 XIOTech Magnitude 3D 3000 (Old)
  • 1 XIOTech Emprise 5000 (New)
  • Cisco 9124 SAN Fabric (New)
  • 6.6 TB of Data
  • 23 SAN Volumes
  • 3 SAN Connected Physical Windows Servers (MPIO)
    • 2 File Servers running VSS and DFS/DFS-R
    • 1 SQL 2003 Server
  • 3 Independent ESXi 4.0.0-171294 Nodes
    • 14 Virtual Servers
      • 1 SQL Server with 1 RDM Volumes
      • 2 Web Servers
        • 1 Web Server has 2 RDM Volumes
      • 1 SCCM Server with 1 RDM Volumes
      • 1 File Server with 1 RDM Volumes
      • 9 App/Test Servers
    • 3 SAN Connected VMFS Volumes (MPIO)
      • 1 Assigned to 1 Node
      • 2 Shared by 2 Nodes
    • 4 SAN Connected RDM Volumes (MPIO)

Environment 1 Process

The first environment had a layer of complexity over the second because I had two independent fabrics. Luckily all servers were configured with MPIO support, so I was able to remove one fibre pair, at the cost of performance, and place the servers into both zones.

Preparation

To prepare for the move I created all volumes ahead of time with a slight buffer (1GB) to handle mirroring of volumes. Both SANs and SAN Fabrics remained online during the entire process.

ESX Nodes

  1. Moved 1 fibre channel port to the new fabric (Switch 1) from each node.
  2. Zoned the fabric to allow the hosts to communicate with both SANs.
    1. With the Emprise 5000, Port 1 only needed to see MRC1 on both controllers, since they are plugged in to Switch 1, resulting in 2 zones per server (e.g., Server1-HBA1/ISE1-MRC1 and Server1-HBA1/ISE2-MRC1).
  3. Created host record on each Emprise 5000 for each ESX node (ESX01/ESX02/ESX03).
  4. Assigned all new volumes to all ESX nodes so that they were seeing the same thing.
  5. Rescanned the HBA in the vSphere Infrastructure Client and verified that all volumes were seen.
  6. All datastore volumes were configured as VMFS volumes; RDM volumes were untouched at this point.
  7. Using Storage vMotion I was able to migrate all virtual servers live from the old datastore to the new datastore. This process completed rather quickly, moving all VMs within about an hour.
  8. With the RDMs I assigned them to their appropriate servers and then followed the process below for Windows Servers starting at step 7.

vSphere’s Storage vMotion proved to be a wonderful tool and worked without issue. There was zero downtime for any virtual server during the entire migration, aside from the servers with RDMs, which went through the dynamic disk conversion process described below for the Windows servers.

Existing Windows Servers

  1. Verified that both fibre channel links were active.
  2. Moved 1 fibre channel port to the new fabric (Switch 2) from each server.
  3. Verified that my redundant paths were no longer fault tolerant.
  4. Zoned the fabric to allow hosts to communicate with both SANs.
    1. This time I zoned for MRC2 on each Emprise 5000, since I am connected to Switch 2, which is where MRC2 is connected for each Emprise.
  5. Created host records on each Emprise 5000 for each server.
  6. Assigned all new volumes to their appropriate servers and, using the Disk Management MMC, rescanned and verified that I could see all volumes.
  7. I then initialized and converted all new volumes to dynamic disks.
  8. I didn’t want DFS-R to have issues with dismounting volumes so I stopped the ‘DFS Replication’ service at this point.
  9. If the server was a SQL server or other App server I would also stop the appropriate services to prevent I/O during the conversion process.
  10. Since VSS storage for all volumes is located on a single, dedicated volume (V:\), I decided to convert this one to a dynamic disk first. To do this you must dismount all volumes that use it as their VSS storage location, using 'mountvol <drive letter>:\ /P'. This is a critical step: per the documentation I found, if VSS loses access to its storage location for more than 20 minutes you will lose all of your snapshots. Bird-dogging this process and following these steps worked very well; deviating from them resulted in one server losing all of its VSS snapshots. VSS snapshots should never be used as a primary backup.
  11. Once all volumes were dismounted I then converted the disk to dynamic.
  12. After the VSS volume successfully completed I immediately remounted all volumes back to their original drive letters.
  13. I then selected all remaining, existing volumes and converted them to Dynamic Disks.
  14. Now that all of my new and old disks were dynamic, I went from largest to smallest and added each new volume as a mirror to its old volume. I did it this way because the mirroring dialog only shows volumes large enough to serve as a mirror, so working from largest to smallest left only one option each time as I moved down the list.
  15. I was now able to enable the ‘DFS Replication’ service without worry.
  16. After mirroring completed I right-clicked each original volume in the Disk Management MMC and removed it from the mirror. This leaves a simple volume with only the new volume in use.

Exchange Server Move

The Exchange server move was a much smoother and more straightforward process, since that server was not previously SAN connected.

  1. Zoned the fabric to allow the host to communicate with the new SANs.
    1. Since I have 2 HBAs going to 2 Emprise 5000s I needed to create 4 zones for this host:
      1. Host-HBA1/ISE1-MRC1
      2. Host-HBA1/ISE2-MRC1
      3. Host-HBA2/ISE1-MRC2
      4. Host-HBA2/ISE2-MRC2
  2. Created host records on each Emprise 5000 for the server, including both HBAs.
  3. Assigned all new volumes to the server and, using the Disk Management MMC, rescanned and verified that I had all volumes. For each storage group I created a RAID 5 volume for the database files and a RAID 10 volume for the transaction logs.
  4. Initialized and created partitions on all of the new volumes.
  5. Using Exchange System Management I then went to each Storage Group and changed the paths of the log files and system directories to point to the new Log volume. Exchange will then dismount the store and move those items to the new location, remounting the store when completed.
  6. In each Mailbox Store for each storage group I then went through and changed the path for the database and streaming database and pointed them at their new location. Again, Exchange will dismount the store, move the files for you and remount the store when completed.

Exchange’s tools made the process extremely easy, though slow, and it was a downtime event for all users of a mailbox store while that store was being moved.

Final Steps

After all servers were done mirroring their volumes and the mirrors were broken, I was able to use ICON Manager, a XIOTech tool, to verify that there was no more I/O to any of the old volumes; if there was, I could go to the assigned server and see what was still using that volume.

After I/O was verified I then removed the zoning to the original SAN for all servers and moved their secondary HBAs to the new fabric. The fabric was then appropriately zoned and, using the MPIO tools, I verified that I now had 2 paths to each volume.

In the vSphere Infrastructure Client I then went in and ensured that all volumes were set to Round Robin in their path settings to ensure the load was spread across all HBAs.

The only downtime during this process was during the conversion to dynamic disks, which requires the volumes to be dismounted, and during the Exchange moves. We did have one issue during the mirroring where several customers received delayed write failures to the DFS path of their H: drive, where they had rather large PST files located, though no data loss appears to have occurred. Aside from that the migration was successful.

The mirroring was started around 2am on a Wednesday and completed by 9:30am Thursday morning. Total migration process took around 39 hours from start to finish.

1 site down, 1 to go. Only 10 user issues out of 700, and those were due to extremely large (2+ GB) PST files choking on the limited disk I/O and degraded server performance during the migration.

Environment 2 Process

Environment 2 was very similar to Environment 1. The following differences are the only things I will cover here:

  • Servers were already set up for MPIO and the only change was zoning them into the new fabric, with 2 zones per server as follows:
    • ServerX-HBA1/ISE1-MRC1
    • ServerX-HBA2/ISE1-MRC2
  • The ESX nodes did not support Storage vMotion, so manual migration of the VMs was required and was a downtime event.

ESXi Migration Process

For the ESXi migration I followed the same zoning procedures as outlined previously. The ESXi servers had SSH enabled for remote console access from previous work. You can also use the vCenter Converter, but I found it took longer.

  1. All volumes were mapped from the new SAN to the servers
  2. HBAs were rescanned and verified that all volumes were visible
  3. Configured MPIO on all volumes and set the mode to Round Robin
  4. Formatted all data stores as VMFS
  5. Shutdown all Virtual Machines on a given server
  6. In the Edit Settings dialog I made note of all RDMs assigned to the server and removed them, deleting the files from disk. This is a critical step; failing to do this will result in the RDMs being copied in their entirety to the new datastore rather than just the mapping reference.
  7. From the remote management console, as root, I went through each virtual server's folder on each datastore and verified that no RDMs remained. I then copied each virtual server's folder, in full, to the new datastore.
  8. After the copy completed I went into the vSphere Infrastructure Client and browsed the new datastores. Entering each virtual server's folder, I right-clicked the .vmx file, chose 'Add to Inventory', and gave it a name such as 'ServerName-E5K' to denote that it was now located on the Emprise 5000.
  9. If the server did not have any RDMs the process is done; you can now power on the new VM, choose that it was moved when prompted, and remove the original from the inventory.
  10. If the server did have RDMs you will now need to add the original RDMs back in, as well as the new ones. You can then start the virtual server and perform the data migration from old to new as described above under 'Existing Windows Servers', starting at step 7.

Final Steps

After I/O was again verified, all zoning to the original SAN was removed. No fibre channel cables needed to be re-patched since everything was already connected to the same fabric.

Summary

Because of the ESXi migrations Environment 2 ended up taking 2 full nights of hands-on work. All data was fully migrated within a span of 4 days, only because I waited a day in between the night shifts.

We had one issue with our BlackBerry BES server, which lost communication with Exchange; a reboot of BES fixed it. Aside from that, the only issue reported was from an off-site user who was not notified and could not reach an app server, which was virtual and being migrated, for a few hours.


Creating a Basic jQuery Slideshow

Thursday, April 1, 2010 1:39:51 PM

I needed a basic slideshow/image rotator with just the features I wanted and nothing more, and I needed a good excuse to play with jQuery. Here's the result of satisfying both of those needs. Look forward to another article that will expand upon this basic slideshow with pausing, text and navigation controls. Until then, enjoy.

To get started we need to create a basic HTML file; we’ll call it SlideShow.html. Here’s how it should look.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>jQuery Slideshow Demo</title>
<meta name="Author" content="Ryan T. Hilton" />
</head>
<body>
</body>
</html>

Add jQuery from Google by placing this in your <head> section

	<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>

We also need to add some styles to the page. Create a file called style.css and add this to the <head> section.

	<link rel="stylesheet" type="text/css" href="style.css" />

And now we’ll add some basic styles to style.css

	body
	{
		background-color: rgb(45,45,45);
	}
	.Container
	{
		width: 500px;
		margin: 20px auto;
		border: 4px solid rgb(255,255,255);
		background-color: rgb(200,200,200);
		color: rgb(45,45,45);
		position: relative;
		height: 309px;
	}
	#slideshow
	{
		margin: 0;
		padding: 0;
		list-style: none;
		position: relative;
	}
	#slideshow li
	{
		float: left;
		position: absolute;
	}

Now, let’s set up the HTML for the slideshow. Add this between the <body> and </body> tags. Just add an extra <li> for each image.

	<div class="Container">
		<ul id="slideshow">
			<li>
				<img src="Images/Image1.jpg" width="500" height="309" />
			</li>
			<li>
				<img src="Images/Image2.jpg" width="500" height="309" />
			</li>
			<li>
				<img src="Images/Image3.jpg" width="500" height="309" />
			</li>
		</ul>
	</div>

Let’s create a new .js file and add it to the <head> section now as well. We’ll call it SlideShow.js and place it in the same directory as the HTML file.

	<script type="text/javascript" src="SlideShow.js"></script>

Now, we can start with the basics of the JavaScript. The rest of this will be done in SlideShow.js.
First, let’s set up some variables to configure the behavior.

	var fadeTime = 1200; // Fade in/out each image over 1.2 seconds
	var pauseTime = 4000; // Pause on each image for 4 seconds

Now we need to set up a few internal variables that we’ll use to maintain the state of the slideshow.

	var current = 0; // Current index
	var cur; // Stores current item
	var next; // Stores next item
	var timer; // Used for starting/stopping the rotation
	var size = 0; // Stores the number of items to rotate
	var paused = false; // Are we paused?

How about we get started making things work? We’ll start by creating an initialization function. This will get everything set up and start the rotation.

	function StartSlideShow()
	{
		// Hide all of the items
		$('ul#slideshow > *').css("display", "none")
		.css("left", "0")
		.css("top", "0")
		.css("position", "absolute");

		// Display the first item
		$('ul#slideshow li:first').css("display", "block")
		.css("left", "0")
		.css("top", "0")
		.css("position", "absolute");

		// Set size variable to number of items
		size = $('ul#slideshow li').size();
		
		// Start the rotation timer
		StartTimer();
	}

	function StartTimer()
	{
		// Make sure we have a clean slate
		clearInterval(timer);
		// Call Switch() every pauseTime milliseconds
		timer = setInterval(Switch, pauseTime);
	}

	function Switch()
	{	
		cur = ($('ul#slideshow li').eq(current));
		// Check to see if we are at the end of the list
		if ((current + 1) == size)
		{
			next = ($('ul#slideshow li').eq(0));
			current = 0;
		}
		else
		{
			next = ($('ul#slideshow li').eq(current + 1));
			current = current + 1;
		}
		// Fade between the images
		cur.fadeOut(fadeTime);
		next.fadeIn(fadeTime);
	}

Ok, we are all done with SlideShow.js for now. Let’s go back to our SlideShow.html file. In the head section, add the following. This will tell jQuery to run StartSlideShow() when the page is loaded.

<script type="text/javascript">
<!--
	$(document).ready(function() {
		StartSlideShow();
	});
-->
</script>

You should now be able to run your slideshow by opening the HTML file in your favorite browser.

View Demo
Download


MAS90 ODBC Tool

Friday, February 26, 2010 1:10:26 PM

Lately I've been expanding my support offerings to an old client into the realm of Sage Software's MAS90. It started off with a redesign of their website and the need to regularly upload customer data from their internal MAS90 installation to the website's database.

They currently run MAS90 using the ProviderX ODBC connections. I haven't worked out all of the nomenclature and inner workings yet, so forgive me if I misspeak here.

Not knowing anything about MAS90 when I started, I had to leap over a few hurdles to get .NET applications to talk to the database, run queries, and tell me the schema of the tables. There may be an easier way of doing this, but I was unable to find one, so I broke out some .NET ODBC tutorials. What a mess.

After constantly rewriting and recompiling my test applications to test the different queries I finally got sick of it and wrote a command line interface (CLI) for interacting with the MAS90 database.

Once configured it allows you to throw ad-hoc queries at the database and, if you choose, see the schema. It is a very rough cut but I hope that it helps somebody else out. If not, it has done and will continue to do what I need it to do.

You can download the application here

From the readme:

This is the initial release of my MAS90 CLI. This was developed for testing queries against MAS90 for a customer so that I could learn the internal structure of the MAS90 database tables and extract sample data from them.

To use:
   Open MAS90CLI.exe.config and setup your connection string, such as server name and path to your ProviderX libraries

You can also set default values in the configuration file

run MAS90CLI.exe it should connect successfully and put you at a MAS90> prompt.

Setting Variables:
   Syntax:
      set [variable] = [value]
      set showschema = true
      set recordlimit = 30
   Variables:
      timeout - timeout in seconds for the query to run (0+)
      recordlimit - number of records to return before quitting (1+, 0 = all)
      showschema - prints out the schema for debugging purposes (true/false)

Viewing Variables:
   Syntax:
      get [variable]
      get showschema
   Variables:
      timeout
      recordlimit
      showschema

Running Queries: At the MAS90> prompt type in your SQL query. You may need to reconfigure your console window to fit the data

MAS90> SELECT * FROM CUS_Invoices ORDER BY InvoiceDate DESC

Queries use the standard SQL syntax as allowable via the ProviderX ODBC provider.

You should start seeing the data appear followed by the number of records found and the time the query took to run

Quitting:
   From the MAS90> prompt type in 'quit' or 'exit'

If you have any questions or problems feel free to contact me:

Ryan T. Hilton
Pacific NW Data Consultants
Ryan@pnwdc.com
http://PNWDC.com - http://RTHilton.com


Virus and Malware Cleanup and Protection

Tuesday, September 22, 2009 11:54:13 AM

I have been asked many times recently to clean up computers that have been infected with various trojans, viruses, worms, etc. They all end about the same way: with me making a house call.

Now, while I am usually willing to help people, sometimes it just makes sense for people to help themselves a bit. Hopefully this will help you.

Recommended Software
I recommend AVG for day-to-day anti-virus software. It stays reasonably up to date with signatures, is free for home use, and takes care of most of the threats I've come across.

For cleanup I also recommend Malwarebytes' Anti-Malware. I have found it cleans up several trojans that nothing else seems to touch.

Cleanup Instructions
One common theme I have found with removal of malware is that it needs to be performed in a special bootup mode called Safe Mode. Safe Mode allows Windows to load a minimal amount of resources and limit what starts up automatically, including most pieces of malware and their self-defense mechanisms. To boot into Safe Mode follow these steps:
  1. Shutdown Your Computer
  2. Press the power button on your computer
  3. You should now see the BIOS loading; this usually shows your computer manufacturer's logo, hardware information, or both.
  4. Press F8 repeatedly until you are prompted for how you would like to start Windows - If you see the Windows loading screen then you will need to try again.
  5. From the list select 'Safe Mode with Networking'
  6. You should now see many lines of text scroll across your screen rapidly followed by Windows starting up and telling you that you are in Safe Mode

From Safe Mode, download and install both products. These should be installed immediately after you receive a new computer but since you are reading this article chances are that wasn't done. Not to worry, we will do this now.

After you have installed both products first run AVG, have it update the virus definitions from the internet and then run a full scan. Any items discovered should be selected and deleted.

Now, repeat the above steps for Malwarebytes' Anti-Malware. This product is key for the removal of some items such as 'AntiVirus 2009' and 'VirusShield 2009'. To date it is the only program that I have found to effectively remove these.

Your computer should hopefully be clean now and running much faster. For some added benefit you should also defragment your C: drive.

If you have any other ideas, feel free to share them in the comments.

Custom Thread Manager In C#

Friday, June 19, 2009 12:59:31 PM

I was recently working on a project that required me to limit the number of background threads that could be executing at any given time. It could be my lack of knowledge of how the built-in .NET classes for managing thread execution work, but I decided to embark on a quest to work up my own solution to this problem. The code below was hastily written to solve the problem at hand, but you should be able to see the underlying concept I used. Bear with me.

First, I needed some thread parameters so that I could pass these to my scanner. Here is the class that I created to handle this. Name is the name of the Domain Controller we are going to query.

public class ThreadParams
{
    public string BaseOU = "";
    public string Name = "";
    public ThreadParams(string baseOU, string name)
    {
        BaseOU = baseOU;
        Name = name;
    }
}    

Next I created a class that acted as the shell of my scanner, called LastLogonScanner.

public class LastLogonScanner
{
    private System.DirectoryServices.ActiveDirectory.DomainControllerCollection _domainControllers;
    public System.DirectoryServices.ActiveDirectory.DomainControllerCollection DomainControllers
    {
        get
        {
            return _domainControllers;
        }
        set
        {
            _domainControllers = value;
        }
    }
    private System.DirectoryServices.ActiveDirectory.Domain _domain;
    public System.DirectoryServices.ActiveDirectory.Domain Domain
    {
        get
        {
            return _domain;
        }
        set
        {
            _domain = value;
        }
    }
    public LastLogonScanner()
    {
        _domain = GetDomain();
        EventLogger(String.Format("Enumerating DCs for domain {0}", _domain.Name));
        _domainControllers = GetDCList(_domain);
        EventLogger(String.Format("Found {0} DCs", _domainControllers.Count.ToString()));
    }
    public System.DirectoryServices.ActiveDirectory.Domain GetDomain()
    {
        return System.DirectoryServices.ActiveDirectory.Domain.GetCurrentDomain();
    }
    public System.DirectoryServices.ActiveDirectory.DomainControllerCollection GetDCList(System.DirectoryServices.ActiveDirectory.Domain domain)
    {
        return domain.DomainControllers;
    }
}

Now, here is where the real meat of it lies. For each DC it creates a thread. It then monitors the threads: if a thread has not been started and there are fewer than 5 running threads, it starts a new thread and passes it the base OU and the Domain Controller to scan.


public void GetLastLogonTimes(string baseOU)
{
    List<Thread> threads = new List<Thread>();
    foreach (System.DirectoryServices.ActiveDirectory.DomainController dc in _domainControllers)
    {
        Thread t = new Thread(new ParameterizedThreadStart(GetUsersForDC));
        t.Name = dc.Name;
        threads.Add(t);
    }
    while (true)
    {
        bool threadsActive = false;
        int runningCount = 0;
        int pendingCount = 0;
        foreach (Thread t in threads)
        {
            if (t.IsAlive)
            {
                runningCount += 1;
            }
            if (t.ThreadState == ThreadState.Unstarted)
            {
                pendingCount += 1;
            }
        }
        if (runningCount < 5)
        {
            if (pendingCount > 0)
            {
                foreach (Thread t in threads)
                {
                    if (t.ThreadState == ThreadState.Unstarted)
                    {
                        ThreadParams tp = new ThreadParams(baseOU, t.Name);
                        t.Start((object)tp);
                        break;
                    }
                }
            }
        }
        foreach (Thread t in threads)
        {
            if (t.IsAlive)
            {
                threadsActive = true;
                break;
            }
        }
        if (!threadsActive)
        {
            break;
        }
        // Sleep briefly so this monitoring loop doesn't busy-wait and peg a CPU core
        Thread.Sleep(100);
    }
}

A bit of a kludge, I know, but it was very effective.

You can download the full program here.


Server 2003 Pre-R2 Domain-Wide Last Login Scanner

Friday, June 19, 2009 12:13:49 PM

Looking for inactive accounts in your domain? The solution might be more difficult than you think, but I have the answer. I was recently tasked with providing a means to scan a domain of almost 100 domain controllers (DCs) to see which accounts had gone 30, 60, and 90 days without a login. Because we were not yet at a full 2003 R2 domain functional level we could not take advantage of the LastLogon attribute directly.

For those who aren't aware, prior to Windows Server 2003 R2 the LastLogon schema attribute was not a replicated field. The solution we came up with was to connect to each DC in the domain directly, enumerate the list of users, retrieve their last logon times, and then compare them to find the most recent entry for each user across all DCs. Here is a rundown of the logic:

  1. Connect to local DC
  2. Enumerate list of all DCs in the domain
  3. Close connection
  4. For each DC discovered in step 2, create a connection to it
  5. Query all users and their LastLogon attribute
  6. Create array and store data from step 5 into it
  7. Close connection
  8. Compare arrays and find the most recent login for each user
  9. Output data to CSV file
  10. Manually analyze the data or write another program to do same

Pretty basic, though there were some pitfalls along the way. One of them was that I did not want to hammer the network, so I needed to limit the number of simultaneous connections. We decided to only connect to 5 DCs at a time.

You can download the code and take it for a spin here.


DFS-R Fails To Replicate Some Files

Tuesday, October 21, 2008 1:14:44 PM

DFS-R fails to differentiate “long” filenames from filenames that appear to be 8.3 short names. While this is rarely an issue, and even less so as older files become obsolete, I have seen it happen in a couple of situations. For example, take these two files:

   Notification Procedures.doc
   Notifi~1.doc

To reproduce this fully, create a new folder on a replicated volume. Create a new document with the first filename and wait for it to replicate. Now, from the command prompt, type in ‘dir /x <Source Directory>’ and note the 8.3 filename is “NOTIFI~1.DOC”. Type in the same command but use the destination directory this time. You will note that it, too, has the same 8.3 filename. Next we need to create the second file. Create a new document and save it as “Notifi~1.doc”. You should not see this file replicate. If you now run the ‘dir /x’ command for the two directories you will see that the first document on the source has been changed to “NOTIFI~2.DOC” however on the destination this change has not been made. DFS will note that the file exists in both locations and no replication will occur.

Again, this isn’t an issue unless you have a large collection of legacy filenames, especially ones that only ever kept their 8.3-style short names.

One workaround that I have found, which will work after the fact, is to disable 8.3 filename creation on your NTFS volumes. You can do this on Server 2003, XP and Vista using the command ‘fsutil.exe behavior set disable8dot3 1’. For Windows NT and Windows 2000 you need to make a registry change. For more information on this change you can reference this Microsoft article.
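
For reference, the registry value that controls this behavior on NT/2000 (the same setting fsutil flips) should look something like the following; verify against the Microsoft article for your OS before applying it, and note it only affects files created after the change:

   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
      NtfsDisable8dot3NameCreation (REG_DWORD) = 1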


