
Wednesday, 30 September 2009

Dynamics CRM - minor bugette workarounds

1. When checking for updates you get an error. No problem - hit the back button and try again; it should work the second time.
2. It wants to install the Visual C++ runtime every time. This is a known issue, and thankfully it doesn't take too long to reinstall it each time.
3. The install thinks SQL Full Text Search is not installed. Make sure you have the latest update of Dynamics CRM before installing - it's a known bug that has been fixed in a patch (and of course, make sure you've actually started the service). A quick T-SQL check is below.
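
If you want to confirm full-text search really is installed on the SQL instance before blaming the CRM installer, here's a minimal check you can run from SSMS. It uses the standard FULLTEXTSERVICEPROPERTY function - a result of 1 means full-text is installed on that instance.

    -- Returns 1 if full-text search is installed on this instance, 0 if not
    SELECT FULLTEXTSERVICEPROPERTY('IsFullTextInstalled') AS FullTextInstalled;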


Yippie ki-yay

I have finally made it past the verification screen. Dynamics CRM is now installing on the virtual server. On the plus side - all this mucking around has given me a good idea of what to double and triple check before even trying to start the install. :)

1. Make sure you have a Dynamics CRM AD group set up with permissions. The network support guys really don't like letting you have access to their servers, so this is a good alternative: get them to set up a group and give you permissions to that.
2. Install SQL Server 2008 with all components selected.
3. Make sure IIS is installed and running
4. Make sure Windows Server is totally up to date with its patches
5. Head into Control Panel > Administrative Tools > Services and make sure Remote Procedure Call (RPC) and Remote Procedure Call (RPC) Locator are both started.
6. Open SQL Server Configuration Manager - make sure all the services are started and using an account with access rights. I used NT Authority\NetworkService as this can be selected as an option during the CRM install. Anything but LocalService - you do not want to use that.
7. Open Reporting Services Configuration Manager - connect to the report server instance, click on Service Account and make sure that is using an account with access. Again, I set this to use Network Service. As with the previous config, don't use Local Service.
8. Whilst in Reporting Services config, click on Encryption Keys and select Change. (I didn't have any keys in place, so if you're using an existing database you probably don't want to do this - instead, take a backup of the existing key at this point.)
9. Whilst in Reporting Services config, click on Web Service URL, then click the URL link and copy it to Notepad - you'll need it later.
10. Next, you need to know your email router information - tech support can generally help with that if you don't know what it is.
11. Jump back to the Services screen and check that everything is running. SQL Full-text Filter Daemon Launcher may be stopped - make sure it is started and using Network Service.
12. Get your license key ready... start the install....


Battling away

The Dynamics CRM install is still going on.... this has to be the hardest software install ever. It was easier installing Red Hat ten years ago.

This is a completely fresh install on a nice new Windows 2003 virtual server. SQL 2008 is installed, IIS etc... basically all the requirements from the planning doc are there - I spent several days making sure of it - and ALL services are running with the correct access rights, including the full text search. But can I get past the verification screen on the CRM install? Nope. For some reason it thinks the full text search is not installed. I'm getting to the point where banging my head repeatedly against the wall may be more productive than doing this install. I'm losing a lot of time here that could be spent on other work.

Wednesday, 23 September 2009

Dynamics CRM

This week's missions:

Mission 1
Create a Dynamics CRM environment for peeps at work to play with. The plan is: 1 x virtual CRM server on a Windows Server 2003 o/s, 1 x virtual SQL server (might just include that on the CRM server), 1 x virtual MOSS server (again, will include on the CRM server), and 1 x virtual client that people can copy and destroy as they see fit. It's going to be hooked in to our actual domain/AD server. Still stuck on installing the server updates for the base o/s.... this could take a while.

Mission 2
Once we have signed our partner agreement, grab a copy of the CRM VPC that can be used for demos - very handily created by Microsoft so that it's all set up and ready to go with every option you could possibly think of adding to CRM. Jump on to Amazon and set up a 64-bit Windows 2008 server, stick the virtual demo on the cloud-based server and see how we go running the VPC in the cloud. This one could be fun. :)

Friday, 18 September 2009

Database management via Visual Studio

Notes from Greg Low's talk at TechEd NZ '09.

  • Visual Studio Team System database edition has merged with developer edition
  • Now called Visual Studio Team System: DB Pro (VSTS:DBPro) - a.k.a. 'DataDude'
  • Released initially as SQL Server 2005 edition
  • The 2008 edition of VSTS:DBPro does NOT support SQL Server 2008 databases out of the box
  • GDR release provides SQLS 2008 support
  • GDR2 recently released
  • T-SQL is re-parsed
  • T-SQL parsing dlls can be incorporated into your own apps
  • Should be more extensible in VS2010
  • Gives great control over database projects
Project Management
  • Model based development
  • Team collaboration - TFS, Workitems, Tasks
Change Management
  • SCCI source code management integration
  • Refactoring
  • Schema and data comparison tools
Testing
  • Database unit testing
  • MSTest integration
  • Automated data generation system
Build/Deploy
  • MSBuild integration
  • Command line tools
  • Allows for multiple inconsistent target systems
  • Build & deploy phases have been separated
  • All important tasks are scriptable
Project system
  • Offline development
  • Stored in .sql files
  • Reverse engineering an existing database is often the easiest way to start a project
  • Projects can be included in other Visual Studio solutions
  • Projects relate to a specific database
Refactoring
  • Allows for cascading changes within a database
  • Also updates dependent project objects, i.e. schema, data generation plans, unit tests, sql scripts
More info
  • blogs.msdn.com/gertd
  • blogs.msdn.com/vstsdb
  • blogs.msdn.com/bharry

Sharepoint & SSRS Integration

Notes from Ian Morrish's talk at TechEd NZ '09.

SQL Server 2005 SP2 - deep SharePoint integration
  • Light up reporting experience (that's my note, and no, I don't know what I meant here either...)
  • Report server in sharepoint mode
  • Reporting services sharepoint add in
New in SQLServer 2008
  • Data driven subscriptions
  • Support for URL parameters
  • Support for RS management tools in sharepoint mode
Integration benefits
  • Single user interface
  • Use SP deployment topologies to distribute reports
  • SP features such as workflows, versioning, collaboration are available
  • Reporting authoring tools can publish direct to SP
  • Report subscriptions can be delivered via SP
  • Reports are executed in report server to leverage all its enterprise capabilities
Limitations
  • No report manager
  • No linked reports
  • No Sharepoint SSO
  • Anonymous enabled web apps are not supported
  • Default zone only
Architectural decisions to make
  • Sharepoint topology
  • SQL topology
  • Security - NTLM / Kerberos
Misc Notes
  • Demo samples available from Codeplex
  • Need to install sharepoint object model on report server and WSS/MOSS server
SP Config options
  • After installing SSRS module on Sharepoint, an option appears in sharepoint to configure reporting services
  • Need to grant access between sharepoint and SSRS
  • If you enter an incorrect username/password it won't give you an error - it will just return you to the entry page. If the username/password is correct it will take you back to the main config options screen
  • Set up reporting services server defaults
  • Activate report server feature library
  • Create document library
  • Allow management of content types
  • Security errors can often be fixed in RSReportServer.config


Building Applications on SQL Azure

These are my notes from Jeremy Boyd's presentation at TechEd NZ '09. These notes are pointers for finding out additional information and as such do not give full details of any particular areas of SQL Azure.

  • Initial services coming with SQL Azure - RDBMS, Data Sync, Data Hub
  • Down the road they will add more services such as Reporting Services and Analysis Services
  • Databases limited to 10GB each
  • SQL Azure is not SQL Data Services - they are two different things. SQL Data Services no longer exists and the tasks it was achieving are now part of Windows Azure.
  • SQL Azure has a familiar SQL relational model
  • Virtual DB server
  • Auto HA and fault tolerance
  • Friction-free scaling
  • Self provisioning
  • Provisioning model: Create an account, add a server, add a database, connect & play OR you can create via SSMS
  • All standard T-SQL language minus a lot of DBA stuff
  • When connecting ignore the error message - it's irrelevant and doesn't stop you connecting.
  • All tables MUST have a clustered index on SQL Azure
  • Can target SQL Azure either remotely from on-premise application, from an application on Windows Azure (fastest option), or alternatively use SQL Azure for storage.
  • V1 of SQL Azure does not support partitioning, so use sharding instead. Sharding means using several databases to store portions of the data, with the same schema used across all of them (a rough sketch of a shard map follows this list).
  • SQL Azure is currently in CTP1 with CTP2 due soon with better tooling
  • Free to use until launch date sometime in November 2009 - so get playing
  • Check it out at http://connect.microsoft.com
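
As a rough illustration of the sharding note above: the usual approach is a small shard map that the application reads to decide which database holds a given key range. This is just a sketch with made-up names - the routing itself happens in your application code, not in SQL Azure.

    -- Hypothetical shard map; each row says which database holds a range of customer IDs
    CREATE TABLE dbo.ShardMap
    (
        CustomerIdStart int     NOT NULL,
        CustomerIdEnd   int     NOT NULL,
        ShardDatabase   sysname NOT NULL,  -- e.g. 'CrmData01', 'CrmData02'
        CONSTRAINT PK_ShardMap PRIMARY KEY CLUSTERED (CustomerIdStart)
    );

    -- The application asks which shard to connect to for a given customer
    SELECT ShardDatabase
    FROM dbo.ShardMap
    WHERE 12345 BETWEEN CustomerIdStart AND CustomerIdEnd;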
The following items are missing:
  • CLR
  • DB File Placement
  • DB Mirroring
  • Distributed queries
  • Distributed transactions
  • Filegroup management
  • Full text search
  • Global temp tables
  • Spatial data & indexes
  • SQL Server config options
  • SQL Server service broker
  • System tables
  • Trace flags and quite a few other items
  • Check MSDN for a command-by-command breakdown of what is and isn't supported.
Migrating an existing DB to SQL Azure
  • ANSI_NULLS not supported
  • PAD_INDEX not supported
  • Can't specify row locks
  • Can't specify filegroup reference or partitioning
  • Codeplex project which copes with this and updates your script: SQL Azure Migration Wizard
  • Check all tables have a clustered index before migrating (the Codeplex tool will do this) - see the example table script below
  • Should be able to migrate DB without too many hassles
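
Pulling the migration notes together, a table script that's ready for SQL Azure ends up looking something like this minimal sketch (hypothetical table): no filegroup/ON [PRIMARY] reference, no PAD_INDEX or row-lock options, and a clustered primary key so the 'every table must have a clustered index' rule is met.

    -- Hypothetical table trimmed for SQL Azure
    CREATE TABLE dbo.Contact
    (
        ContactId  int           NOT NULL,
        FullName   nvarchar(200) NOT NULL,
        CreatedOn  datetime      NOT NULL DEFAULT (GETUTCDATE()),
        CONSTRAINT PK_Contact PRIMARY KEY CLUSTERED (ContactId)
    );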
On-premise scenario
  • Data is located outside the firewall from where the app runs
  • In NZ we need to be especially vigilant about handling latency
  • Expect poor latency when using the on-premise scenario and factor caching into the design of your application
  • Favour chunky calls rather than chatty calls
  • When caching, don't hold on to the data so long that it becomes stale; use query notifications so that if the data changes, the cache is updated
Better option
  • Build an app - deploy to Windows Azure with SQL Azure as the backend
  • This gives you lower latency; however, as good programming practice you should still factor caching into the design of your app
Synchronization
  • SQL Azure - great data sync point
  • High Availability
  • Scalable
  • Sync framework = Project Huron
  • Mobile device access can make good use of synchronization
  • Project Huron - only available server side at the moment, client side bits coming soon
Database size 10GB
  • Includes: Primary replica data, objects and indexes
  • Does NOT include: Logs, masterdb, system tables, server catalogues, additional replicas
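
For keeping an eye on that 10GB cap, a query along these lines should show the current reserved size (it counts data and index pages only, which matches the list above). As far as I can tell sys.dm_db_partition_stats is one of the system views SQL Azure does expose, but treat this as a sketch rather than gospel.

    -- Approximate database size in MB (data + indexes, reserved pages)
    SELECT SUM(reserved_page_count) * 8.0 / 1024 AS SizeMB
    FROM sys.dm_db_partition_stats;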
More info
  • More info on upcoming project additions: http://www.microsoft.com/azure/sqllabs.mspx
  • http://blogs.msdn.com/ssds
  • To sign up for the CTP: http://msdn.microsoft.com/en-us/sqlserver/dataservices/default.aspx

Tuesday, 15 September 2009

Sharepoint notes from Matt Velloso

These are some notes from Matt Velloso's session at TechEd 09 on the 14th Sept. These are just my notes created whilst watching the session and they can be used as a start point to find out more information.

Myths to debunk:
  • You can change meta data easily. Well, yes you can, however, that doesn't mean you should. So don't. If you need to change your meta data, plan it out first and make sure that your changes don't break another part of the system.
  • There's no code so it's risk free. Cobblers. What do you think runs Sharepoint if it's not code? Always manage risk and plan for various eventualities.
  • It's an out of the box app, we don't need to test. Sigh.... just test it already and get over the 'testing isn't necessary' mentality.
Blobcache
  • Stop going to SQL Server all the time
  • Use Fiddler2 (HTTP debugging proxy) to check the status of your blobcache
  • Enable blobcache in web.config
  • How to break your blobcache: reset IIS, then hit refresh on your page. The fix is to tell the service not to use the SharePoint DLLs - in web.config, in the HTTP modules section, remove all the HTTP modules that are SharePoint DLLs, then do an IIS reset. Why was it broken? You lock your resources when you refresh, as services run on a FIFO basis.
Content Types
  • When fixing/updating, don't uninstall the feature. Migrate properly to ensure all data types are updated correctly, including historical items.
  • Do things in an incremental manner
  • Script changes in .NET if they are major changes
Timer Job
  • Issue: the timer job service randomly stops. Why? The 'Log on as a service' policy was enforced by domain policy on all machines, including servers.

Other notes
  • Absinthe is a very cool tool for hacking databases via a web app. Uses SQL injection.
  • Full deployment deletes site and redeploys - DON'T do this after your first deployment. Use Incremental deployment instead. And if your incremental deployment isn't working then fix it!
  • Lists vs. Tables - make sure you use the correct tool for the job, i.e. use tables for complex data relationships and use lists for workflows. It's okay to use a mixture of both.
  • List performance - do NOT use GC.Collect()
  • Check out the SPDisposeCheck tool


Overview of Azure

Notes from Chris Auld's presentation at TechEd 09. Again, as with the other TechEd blogs, these are just notes that I can use to find out further information when more time is available. :)

Azure:
  • High scale application architecture
  • Consolidate traditional IT to the cloud
  • Move servers and apps out-house
  • Reliable hardware in the cloud
  • Virtualize to the cloud
  • Manage explosive growth (scale out cloud)
  • Scale out clouds are built around disposable hardware
  • Reliability is built using software
  • Scale out cloud is load balanced by default
  • Greenness - PUE (power usage effectiveness) = Total Facility Power / IT Equipment Power. Google and Microsoft are getting around 1.10 to 1.25 PUE; the Intergen server room is running at about 1.6 PUE.
  • MS cloud offering = Windows Azure, .NET Services, SQL Azure
  • Azure sits you above the abstraction layer (IMAGE TO GO IN HERE)
  • Compute source (IMAGE TO GO IN HERE)
  • Load balancer is key part of Windows Azure
  • RoleEntryPoint's Start() has no return value - it's always a while(true) loop.
  • VMs are cloud optimised
  • VMs are running 64 bit Windows Server 2008
  • Each VM has one to one relationship with processor core
  • Developers have a desktop environment available that simulates the cloud locally
Storage in the cloud
  • HTTP(S) via REST services into cloud storage
  • Scales out across server farms consistently
  • Blobs are addressable as URLs
  • Tables are persistent dictionary, not relational
  • Queues link worker role and web app role
  • Can access Azure storage from any app that can get through using HTTPS via port 443
  • Only port 443 at the moment
  • Horizontal data partitioning
  • Have to nominate a partition key
  • Items with same partition key stored on same partition
  • Items with different partition keys MAY be stored on different partitions
  • MUST access via REST, can't use ADO.NET
  • SDK provides some useful tools
  • No SQL - no real joins/aggregates; limited indexes; no schemas; no referential integrity
  • Can't easily move relational database to the cloud
  • Will scale out MASSIVELY
  • Don't need prior knowledge of how many partitions will be required
  • Queue - the web role receives work and writes a message to the queue; the worker role processes the message and then deletes it. If the message isn't completed, it is re-added to the queue and processed again, so watch out for messages being duplicated.
  • May only need 10 instances for 24/7/365, but it's easy to switch to 1000 instances for the 5 days a year that you may need that many.
  • Great for start up businesses as there are no requirements to buy server hardware anymore, you can get up and running very quickly in the cloud
SQL Azure
  • True relational database management system
  • Pared back, so missing some things, i.e. SSAS, SSRS
  • "Huron'' data hub
  • Accessed via port 1433 using tabular data stream
  • Sticky, stateful load balancer
  • Database spread across a minimum of 3 servers
  • 10GB is currently the largest database size in the cloud
  • If you put too much load you'll get errors - so if you have a large DB, partition it.
  • Need to deal with partition code inside your application
.NET Services
  • Inter application message broker
  • Provides access control service / claim mapping
  • Provides service bus
  • Cloud based intermediary between clients and internal applications
  • Provides service registry that finds services
  • Quickly establish bi-directional communication
  • Direct connectivity libraries with NAT probing
  • Access control service implements a security token service (STS) in the cloud. It accepts an incoming token and issues another, as the outgoing claims may be different from the incoming claims.
  • Admin can define rules for claim transformations.
What others are doing
  • Amazon/Mosso - pay as you go - you have to set up and maintain your own servers in the cloud.
  • Microsoft/Google/SalesForce - pay as you go - vertically integrated, no server setup/maintenance required
  • VMWare/Appistry - Buy up front - set up and maintain your own servers
  • Amazon are currently the market leaders
  • Amazon use the Elastic Compute Cloud (EC2)
  • Amazon - VMs that let you run Linux or Windows, have to patch/maintain your own servers in the cloud
  • Amazon's cloud offering is highly flexible
  • Google app engine supports Java/Python only
  • Google: you can only execute code for 30 seconds at a time
  • Google uses non-relational, scale out storage
  • Salesforce (SFDC) uses the SFDC defined language
  • SFDC is used for data driven apps
  • SFDC uses non-relational, scale out storage
  • SFDC has no dedicated processor instance
  • Windows Azure + SQL Azure = cheapest HA offering available at the moment
Before moving to the cloud:
  • Think about how you'd partition your data
  • Follow high scale application architecture guidelines


Master Data Management in Kilimanjaro

Notes from the TechEd 09 presentation, presented by Rob Hawthorne.

  • MDM was going to be in Sharepoint 2010 but is now in Kilimanjaro (SQL Server 2008 R2)
  • MDM is NOT a transactional system
  • MDM implements rules before data is loaded
  • Good tool when there is difficulty coordinating multiple systems
  • CTP and TAP in second half of 2009 (CTP3)
  • www.microsoft.com/mdm
  • Any application can contribute, any application can consume, process is key
  • Master data and reference data are treated as the same thing
  • Can coordinate multiple systems, i.e. JDE, SQLServer, DB2 etc...
  • PerformancePoint Planning no longer exists; you can do similar work in MDS but with more focus
  • When creating new project, plan for growth but start small
  • Uses model (cube) deployment
  • Sharepoint integration is available
  • MDM will hook into Dynamics CRM
  • MDM methodology - Envision, Plan, Develop, Stabilize, Deploy
  • Process is key; must get processes right first before building anything

Query Optimisation

Notes from Code Camp on 13th Sept. Just a few notes that will give me some ideas to check out and use to create some more beginner level presentations.

Presented by Mark Souza. Slides will be available on aucklandsql.com at some point soon, probably once TechEd 09 has finished.

  • DMV - sys.dm_exec_query_optimizer_info
  • DBCC SHOW_STATISTICS - find the equivalent DMV / SQL 2005+ code
  • AUTO_UPDATE_STATISTICS - turn off on high data-turnover systems using sp_autostats with 'OFF'
  • Play with sample sizes, especially during the development stage before going to prod. Will a 10% sample size give you different statistics to a 2% sample size?
  • sp_create_plan_guide_from_cache (I think the shipping name is sp_create_plan_guide_from_handle)
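
To save future-me some digging, here's a rough T-SQL sketch of the items above. Table and index names are made up, and the exact procedure names are worth double-checking since these are from memory.

    -- Optimiser activity counters
    SELECT * FROM sys.dm_exec_query_optimizer_info;

    -- Statistics detail for a specific index (hypothetical table/index names)
    DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerId');

    -- Turn off automatic statistics updates for a high-turnover table
    EXEC sp_autostats 'dbo.Orders', 'OFF';

    -- Compare the effect of different sample sizes
    UPDATE STATISTICS dbo.Orders WITH SAMPLE 2 PERCENT;
    UPDATE STATISTICS dbo.Orders WITH SAMPLE 10 PERCENT;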

Virtualization

As with my other TechEd 09 notes, this will be a list of notes that I made during the presentation. These will give a start point to find out further information.

Virtualization - presented by Rob Reinauer at Code Camp on the 13th September.
  • Find out more about how anti-virus software works with VMs, and whether the AV companies are planning any new methods to handle VMs
  • Demo of Hyper-V Live Migration in SQL Server 2008 R2 (Kilimanjaro):
  1. Copies pages to memory
  2. Copies delta changes
  3. Repeats copying delta changes down to smallest change found
  4. Freezes environment
  5. Copies everything to destination child partition
  6. Takes down source child partition
  7. Brings up destination child partition
  • Freeze downtime is generally 2-4 seconds
  • Connections are maintained throughout process
  • Users will not notice the change of child partition due to maintained connections, they will most likely not even realise they have been moved
  • In SQL11 the bulk of false failovers will be removed
  • System Centre Virtual Machine Manager is very nice

Orinoco - complex event processing

As with my other TechEd 09 notes, this will be a list of notes that I made during the presentation. These will give a start point to find out further information.

Orinoco:
  • Search online for 'StreamInsight' as this is the alternative name for project Orinoco
  • Streaming data without persistence
  • Continuous and incremental processing of event streams from multiple sources based on declarative query and pattern specs with near zero latency
  • Process data in real time, in flow, before it even gets to the database, to speed up result processing time
  • CEP engine - mine & design
  • Queries look for warnings in events that come across
  • CEP engine can sit anywhere in the process

Madison - High end scale out for data warehousing

As with my other TechEd 09 notes, this will be a list of notes that I made during the presentation. These will give a start point to find out further information.

Project Madison:
  • 100's of terabytes
  • FastTrack reference architecture - guarantees consistent throughput for high scale data warehouses
  • Massive scale out to 100's TB
  • Developed by DATAllegro; MS bought them a year ago and are now incorporating their software into Kilimanjaro
  • Large tables are hash distributed
  • Small tables and dimension tables are replicated
  • Create DB is now a simple script for DBAs. This script triggers a system script that will set up an optimally created database.
  • Create table/partition is now combined in one statement

Kilimanjaro

Kilimanjaro is the code name for SQL Server 2008 R2. This version of SQL Server 2008 contains 6 major new components:
  • Gemini - self service business intelligence
  • Synthesis - application and multi-server management
  • Madison - High end scale out for data warehousing
  • Orinoco - Complex Event Processing (CEP) - streaming data without persistence
  • SSRS - Report builder enhancements and Sharepoint integration
  • > 64 core support
Kilimanjaro should be available in the 1st half of CY10.


Gemini - new in SQL Server 2008 R2 (Kilimanjaro)

I'm currently at TechEd and thought it may be useful to blog session notes, both for myself as notes to refer back to, and just in case anyone else out there is interested in any of the topics. The notes are just that, notes... some of these I need to drill down in to when I get back to work to find out more information, but hopefully if this is a topic you are interested in the notes will give you a start point to drill down further as well. There may be some repetition of notes if I've picked up on an idea again later in the talk.

There are a lot of notes, so I'll split the blog posts by subject. This one is about project Gemini. This is the new self service BI tool that will be included with Kilimanjaro. These notes are combined from the Code Camp presentation on Sunday 13th and from Mark Souza's presentation at TechEd on the 14th.

Gemini Notes:
  • Slides should soon be available at www.aucklandsql.com (give Dave a few days to get these uploaded).
  • http://blogs.msdn.com/excel/archive/2009/07/14/sneak-preview-of-project-gemini.aspx.
  • In-Memory Database.
  • Managed self service BI.
  • Accessed from within Excel 2010 after installing Gemini plugin.
  • Reports in other places, such as SAP? You subscribe to the report and it imports the report data into your Excel spreadsheet. After importing data, you set up a relationship link between the existing tables and the newly imported tables.
  • Tables show as tabs in Excel.
  • Data Feed - similar to RSS Feed but with data. Uses same icon as RSS feed.
  • In-Memory Database allows 100's of millions of rows of data to be stored in Excel with minimal memory cost. This is due to storing the data in a columnar fashion and compressing it. Columnar storage compresses much better than row storage because a column is far more likely to contain repeated values that can be collapsed to a single entity (e.g. a column holding a million 'NZ' country codes compresses down to essentially one value plus a count).
  • Speed of In-Memory Database is very impressive. Example used was applying a normal Excel column filter to reduce from 130 million rows to 2 million rows in less than 1 second.
  • Excel 2010 only shows columns that are in use - if you want more columns you add as required.
  • Can relate multiple data sources together at table level.
  • When importing data from various data sources you can use Excel filter technology to fine tune which data you want to bring across.
  • 101 million rows, including Excel overhead = 600MB.
  • Publish direct from Excel to Sharepoint.
  • Data in Sharepoint is no longer static. It can be refreshed either as a forced refresh or on a regular schedule as determined by the document owner or the IT maintenance peeps.
  • sumx/countx - new options that allow you to sum/count data from a tabular source based on an expression, i.e. =sumx(RELATEDTABLE(table),table[column]).
  • Multiple data sources importable into Excel, including but not limited to: Access, SQL Server, SSAS, Azure, Oracle, Teradata, Sybase, Informix etc...
  • Use 'slicers' to create an interactive data analysis app in Excel.
  • When published to Sharepoint, users don't need to have Excel installed on their local machine.
  • Sharepoint 2010 - theatre and carousel effects are very cool. :)
  • Gemini runs 'in process' with Excel.
  • SSRS can link via a connection string to xlsx file that contains data.
  • Use MDX queries in SSRS and SSAS.
  • Sharepoint moves Excel data blob to SSAS.
  • When moved to Sharepoint, 100's or more users can now access the Excel data at the same time.
  • IT services or owner of doc can control the refresh rate of the data. This means we now have live data available in Sharepoint.
  • Once a workbook is published to SSAS via Sharepoint, it is available to other tools, for example SSRS. In Sharepoint, click on the document and select 'Edit in Report Builder' from the drop down.
  • Until release of SQL11, the Excel data blob will only be available as cube data that can be queried and not as a full blown cube that can be related to other cubes.
  • Things that are not there yet: Can't yet use Excel to create SSAS/Visual Studio projects as a start point for SSAS work. Can't yet create full-blown cubes.
  • Suggestion from Greg Low: when sitting with business users, it would be a great idea to use Excel as a prototyping tool to figure out what they want. You then take your prototype query-only cube back to the project team to create a fully featured SSAS project.


Thursday, 3 September 2009

Maps update

YAY! My funky office floor plan map is working a treat. Now to add some dynamic interaction features. :-)

Google Maps API

For those who don't know, Fronde are the sole NZ partner for Google apps until at least the New Year 2010.

Most of the Google work so far has been undertaken by the team down in Wellington, apart from the very shiny Track-U application for the Blackberry created by the Java devs here in Auckland. We wanted to get more people involved up here in Auckland so we're assigning ourselves internal projects. We're all taking a specific area to make that our specialist subject as there are a large variety of Google apps, most of which are quite large when you start to really get into them to see what you can do with them.

I'm currently looking into the maps API and it's proving quite interesting. Three of us are creating an interactive floor plan for both floors of the office, which we're then going to hook into MOSS. I've created the tiles for both floors - that was the easy part, and they showed up beautifully - then came a bit more of a challenge with setting up switching between floors. I've now got a lovely set of buttons showing up and they switch between floors, but the tiles have now disappeared. Mission today: get the maps back. 1 step forward, 2 steps back.... We'll be very good at this by the time we've finished this task we've set ourselves.

Whilst I'm working on the underlying map set up, Ryan is creating some gorgeous images that I can use as maps and Gwen is working on the overlays code, figuring out how overlays work and how best to implement them. Gwen is also our Sharepoint guru, so she'll be leading us when we get to the part where we want to hook into MOSS.

As well as doing all of the basics, we're also looking to set up some dynamic interaction with the map. At the moment I can set it up to interact with an Excel spreadsheet so that you can select people from a drop down list and find them on the map. This is nice and easy for an office admin to update when staff join or leave; however, the eventual aim would be to use AD - we just need to figure out how to do that. That way if someone joins or leaves, the map is automatically up to date as soon as they're set up in the system. Groovy eh? :)

Anyway, better get back to it.... :)