We believe that clients should only convert as much history as they NEED, not as much as they WANT. This advice is contingent on the client having access to their prior system. From a financial accounting perspective, it’s important to bring over the current year summarized or individual transactions. If prior year comparative reporting is needed, then the prior year transactions will need to be brought over. Beyond that, the returns diminish. Why? Because the human cost of having to balance these transactions outweighs the benefits received.
Consider hiring a consultant as the project manager when:
Whichever route is chosen, there should only be one leader.
At a minimum, they should be in charge of the product and ensuring that it operates as advertised during the evaluation phase. Some software companies have service divisions that help during the implementation. This help can come in the form of data conversion services and/or training. For larger and more complex implementation efforts, it may be better for an independent or in-house person to be in charge of your project. At the end of the day, the employee of the software company will be representing their own company’s needs and not the client’s.
Yes, there are – but not from Lupine Partners. The reason these canned plans are not a good idea is that EVERYBODY’S implementation is different. Do not accept a software vendor’s canned work plan. Companies should insist that vendors customize their approach to their specific needs. However, canned work plans can be a starting point for creating a custom plan.
If there are relatively few records to convert, then a manual approach is worth considering. The answer will depend on the amount of time spent creating the electronic conversion protocol versus the benefit received (i.e., time saved by not doing it manually). However, with the tools available today, the electronic approach is usually the right call.
Orchestration is key. You have to keep the ‘traffic’ moving while you build a new highway; therefore, the go-live procedures and processes have to be orchestrated as many times as needed prior to the actual ‘switch-over’ date. This will reduce risk and anxiety and the amount of downtime during the migration to the new system. Bottom line: operations should not grind to a halt.
No, but we have experienced projects that were more difficult than they needed to be.
It depends on the data. The conversion of General Ledger history, for example, will usually start at the beginning of the implementation. This is because there is generally between 12 and 24 months of history that needs to be converted, and it can take a while to validate all of this data – particularly if the chart of accounts has changed between the two systems. Year-to-date vendor payments are typically converted after go-live, but before January 31 of the next year – the due date for 1099s. Variable operational data (e.g., tenant balances) should be done during the “go dark” period while end users are being trained. Static data – data that doesn’t change (e.g., units) – can be completed any time during the implementation process. However, because time gets tighter as go-live approaches, it should get done sooner, rather than later.
If this is a mid-year conversion, is it better to bring over year-to-date vendor payments for purposes of producing 1099s out of the new system, or to produce two 1099s -- one from the new software product and one from the old software product?
It’s easier to do two 1099s, because there aren’t any conversion ramifications. (Simply print a 1099 from each system.) This is fine with the IRS. Remember that 1099s are not part of the company’s mission - they are a reporting requirement from the IRS, with a $50 fine per vendor if not done. If you decide to produce the 1099s for the entire year out of the newer system, then the best use-of-time strategy is to wait until after the implementation is done to import and validate the data. It’s not necessary to have imported year-to-date vendor payments as part of a core go-live strategy.
The data conversion should be orchestrated until it is right - whether that takes 1 iteration or 20. It usually takes 4-5 iterations to get everything right. Time is of the essence when companies are going live, and it is critical that the routine works. They don’t want to be messing around with import scripts when they are without a system for two or three days.
This approach has some merit. It allows companies to enter or import a minimum amount of information required in order to save a tenant record in the new system. Once a tenant record is saved, the system assigns a tenant ID. When tenant IDs are established, work can be done on other areas of the data conversion that relate to a tenant. It just depends on the implementation plan, and what the company is trying to accomplish.
Static data is data that, for the most part, doesn’t change. An example would be units in residential real estate. The tenants change, but generally speaking, the units do not. Static data can be converted at any time during the implementation process - the earlier the better, since time is at a premium closer to the go-dark time frame.
Variable data is data that is unknown until a client goes dark (‘dark’ being that brief time period where there is no system running), and the final data conversion is occurring. An example of variable data would be a tenant’s outstanding balance. This data cannot be converted until the old system has closed down.
The variable data conversion should be orchestrated and tested for a couple of cutoff periods prior to the go-dark/live process. A conversion protocol should be created that will be executed during the 1-2 days of the final variable data conversion. It’s not unusual to encounter problems during the orchestration process. In fact, that’s the point - to find and correct data extraction problems in a safe environment.
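The reconciliation step in that protocol can be automated. The sketch below is a minimal, hypothetical example of the idea: after a practice (or final) conversion of tenant balances, compare per-tenant totals from the legacy extract against what landed in the new system and flag any differences. The field names (`tenant_id`, `balance`) are assumptions; map them to whatever the two systems actually export.

```python
# Hypothetical reconciliation check for a variable data conversion
# (tenant outstanding balances). Field names are assumptions; adapt
# them to the actual legacy extract and new-system export formats.

def total_balances(rows):
    """Sum outstanding balances per tenant from a list of row dicts."""
    totals = {}
    for row in rows:
        tid = row["tenant_id"]
        totals[tid] = totals.get(tid, 0.0) + float(row["balance"])
    return totals

def reconcile(legacy_rows, new_rows):
    """Return (tenant_id, legacy_total, new_total) for every tenant
    whose balance differs between the two systems by more than a cent."""
    legacy, new = total_balances(legacy_rows), total_balances(new_rows)
    diffs = []
    for tenant in sorted(set(legacy) | set(new)):
        old_amt, new_amt = legacy.get(tenant, 0.0), new.get(tenant, 0.0)
        if abs(old_amt - new_amt) > 0.01:
            diffs.append((tenant, old_amt, new_amt))
    return diffs
```

Running a check like this after each orchestration pass turns “did the conversion work?” from a gut feel into a yes/no answer, which is exactly what you want during the compressed go-dark window.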
Yes, it should – particularly the data that won’t be converted to the new system. Sometimes, there will be access to the legacy system forever. If this is the case, then just let the data stay there, and access it as needed. As time goes on, there will be fewer reasons to go back to the old system.
If that is not the case (and especially if license fees are being paid for the old system), then either print the reports in hardcopy or save the files to use in an electronic library folder system – this way users can easily go back and find the information.
There is no better or worse case. The ease with which the data can be pulled from the source system should dictate the decision. For some source systems, it may be easier to pull individual transactions. For others, the best solution would be to bring in a trial balance. It really depends on the capabilities of the source system - its reporting capabilities and how it stores data. Know that pulling individual transactions will result in a slower import process because of the sheer number of transactions that will be brought in. A summarized journal by month, by property, and by GL account will always go faster from an import standpoint (but the summarized entry might be more difficult to create or ‘pull’ from the source system).
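If the source system can export individual transactions but not a summarized journal, the summarization itself is straightforward to do outside either system. The sketch below shows the idea: collapse individual GL transactions into one line per month, property, and GL account. The column names (`date`, `property`, `account`, `amount`) are assumptions; map them to the source system’s actual export layout.

```python
# Hypothetical aggregation of individual GL transactions into a
# summarized journal by month, property, and GL account. Column
# names are assumptions; map them to the source system's export.
from collections import defaultdict

def summarize_gl(transactions):
    """Aggregate amounts by (month, property, gl_account).

    `transactions` is a list of dicts with 'date' (a 'YYYY-MM-DD'
    string), 'property', 'account', and 'amount' keys.
    """
    summary = defaultdict(float)
    for txn in transactions:
        month = txn["date"][:7]  # keep 'YYYY-MM'
        key = (month, txn["property"], txn["account"])
        summary[key] += float(txn["amount"])
    return dict(summary)
```

A thousand rent receipts for one property in one month become a single importable journal line, which is where the import-speed advantage described above comes from.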
Usually, it’s one consultant. Occasionally, we employ two on larger engagements or where certain subject matter specialties are required. The reason we can keep it so low is because all the consultants at Lupine are adept and trained in classic project management techniques and on most, if not all, of the modules within certain real estate packages. As a result, they can fill a number of roles.
There are several things that we do guarantee:
The one thing that we can’t guarantee is the client’s level of work effort, which is crucial to success. Lupine and the software company can do everything correctly, but if our mutual client is not available to work on the project in the time frames necessary, we can’t guarantee that the implementation will occur at the desired go-live date.
The discovery process has two main goals. The first is to get organizational agreement on the project scope. It is rare that the entire company is in sync with what is going to be implemented and what the conversion process is going to be. From Lupine’s perspective, we can’t manage a moving target. Therefore, there are a lot of questions regarding scope. The second outcome from the discovery process is the creation of custom implementation materials for our client that will be presented during the implementation kickoff meeting.
The goal of the discovery process is not to begin making decisions regarding how the chart of accounts is going to be set up, what the unit type numbering methodology will be, or what the financial statements should look like, etc. All of that will be handled in the module design and configuration meetings.
Every week we send out a status report. For consistency purposes, we send it out on the same day of the week and about the same time. It is sent to the project team, other relevant people in the client’s organization, and relevant people at the software company. The simple and effective format looks like this:
Additionally, we use the status report as the agenda for the weekly status meeting, which is held until we’ve gone live and the project team disbands. These status meetings are held by phone and are attended by Lupine personnel, software company personnel, and client personnel.
The biggest reason we see is poor resource allocation - a lack of understanding that people have full-time jobs. In the beginning, it’s easy to believe that everything can get done and the deadline will be met. However, if clients don’t take into account that, prior to the implementation, everybody was already working 40-50 hours, burnout is a very real possibility. When we write the project work plans, we take this into account and budget the tasks at about 3 to 4 times longer than they would take if nobody had a full-time job.
Other non-successful behaviors or reasons we have observed are:
This has happened twice in our 18-year history. In both instances, the failures had their foundation in the conversion of the general ledger history. Both projects moved faster than our clients’ ability to extract, import, and validate this history. They were trying to convert too much history. At the various go-live dates, they were still several years behind and would not take the steps necessary to bring in the needed resources to get the history loaded. When bringing over general ledger history, make sure that the time/money cost of the conversion is worth the benefit of having the history in the new system.
Discovery, kick-off, module design and configuration meetings, weekly status meetings, training, and orchestration of the go-live process are all equally important, and they tend to build off of each other. However, if forced to pick one, we would say discovery - the process of getting everybody on the same page. This serves as the foundation for communicating an implementation approach. It’s key, and if it is done poorly, the effects will resonate throughout the entire implementation process.
There is a risk of the entire implementation effort being all for naught if the users aren’t able to use the system in the manner in which it was intended. The entire implementation will be deemed a failure - we’ve seen this happen. Generally, there is about a 45-day window for the user base to say either: 1) they love the software, or 2) they hate it. Once they’ve made that emotional decision, it is tough to change their minds. Bottom line here is: don’t go cheap on the training. It is the last piece of the entire implementation effort, but if it is done poorly, everything that preceded it could possibly be time wasted.
Lupine’s methodology is to train a client’s user base in the two days between shutting down the existing systems and going live on the new software. This training will occur simultaneously with the final conversion of data. Therefore, when the users get back to their desk after the training, they are live on the new system. All of the data will have been loaded onto the new system and staff can begin working in a live environment. This reduces the amount of time when knowledge gained during the training can be forgotten.
Between one and ten. Once there are more than ten, even the most disciplined groups will break down and begin having side conversations. Obviously, they can’t listen to the trainer when these conversations occur. The trainer then has to train two to three different groups that are all having various conversations. Thus, the smaller the class size, the higher the focus of the attendees. If more than ten people need to be trained, then breaking it up into two or more sessions is recommended. It will be more expensive, but if the system users do not learn the software, there is a risk of a failed implementation.
Definitely. With software, the knowledge doesn’t translate by simply watching what is being done. Sharing doesn’t really work either, because often one person in the pair will become the teacher and the other the student - while both need to be students. Hands must be on keyboards.
These can be terrific. Even though user guides are provided by the software companies, the fact is most people won’t use them. It’s not because they’re not well written - it’s because they’re thick and daunting. Most people just want a quick guide to show them how to do the day-to-day tasks. Part of the training implementation program should be to have these guides and/or aids created. They can be passed out during the training session, after each training module, or at the end of the entire training process.
Ideally, a client’s own data should be used, but practically speaking, this is difficult and usually not cost effective. This is due to the fact that the database has to be set up in order to be meaningful in a training environment. Sample or training databases are usually already set up to demonstrate software functionality, whereas the client’s database won’t be.
The question is this: What is the cost/benefit of the company setting up their database for training just so the users will recognize the properties, units, and tenants? They are more familiar with the data, but does it actually help them learn the software faster or better? The answer is: probably not. You can spend the finite amount of time and resources in other areas on the implementation.
We have all been on both sides of this. You’re the advanced person in a class of ‘newbies’, and you’re bored. Or, you’re the inexperienced one, and everything is going over your head. If companies make the mistake of combining groups with varying levels of experience or interest, then they run the risk of one side being bored and the other being overwhelmed.
This is an analysis they need to do in advance, because the goal is to get everybody trained. They may believe they are saving money by putting everyone in the same class, but if only half the class is truly being trained, the goal of getting everyone trained on the new products hasn’t been accomplished. Additionally, they didn’t save any money if one group didn’t get what they needed.
Final end-user training must be done in person. You have to be able to see students’ reactions, and you must be able to monitor the room to see what’s going on. They must be able to see your face as you’re teaching the product. Even with new technology, there are some things that still must be done face-to-face. Remote training can be used to work through particular software or user issues, but it’s not a full end-user training. Don’t try to save money by not having the instructor in front of the people who need to be trained.
This does not work. Period. The people on the phone will be checking email and working on other tasks. No training will have occurred. There may be a box checked off by their name, but they will not actually have the required software knowledge. Resist the temptation to save money on training. If corners need to be cut, do it elsewhere. Invest as much money as needed here – have more sessions, not fewer. Have smaller class sizes, not bigger. Be smart about this.
Here are some training rules we regularly use at Lupine while conducting training:
Most training sessions break down due to a lack of discipline. One way to ensure that everybody maintains focus is to tell the users that they will be tested at the end of the training. Then test them.
A parking lot is a tool used to temporarily park a question that, while important or relevant, is not so at that particular moment. The valid comment is saved and addressed later. Typically, the parking lot is a white board, with somebody assigned to write the items down. This way, everyone sees that their comment will be captured and valued. As a result, attendees don’t mind that their questions are not being answered immediately, because they know they are in the parking lot. At the appropriate time, all issues/questions that have been ‘parked’ will be addressed and discussed by the entire group - leading to much more disciplined sessions.
First of all, what is ‘train the trainer’? It’s when a software trainer trains certain people within an organization, giving them the software product knowledge; those people then go on to train the remainder of the organization. This is in contrast to trainers who directly train everyone within the organization. In the train-the-trainer model, the internal trainers are a kind of middle man in the process. They obtain the software knowledge and go from there. This approach can work very well if you:
Otherwise, companies are probably better off hiring out the end-user training process.
Ideally, a training location should be used. Trainees will be able to focus more as there will be no phone calls, emails, or bosses to disrupt the flow of the session. Renting a room may cost more, but it’s worth it, considering the cost of software failure.
Yes. Do not accept a canned approach. Be proactive - create a first draft agenda and send it to the assigned trainer. If a back-and-forth does not occur, clients will get a canned agenda that almost assuredly will not suit their needs. The trainer won’t know what their hot spots or weaknesses are. There is no way for them to know. Don’t abdicate this responsibility.
They don’t have to be. But, if they haven’t been, they will need to be told how the software modules were configured and other relevant items that only those on the project team would know. The training could have bad outcomes if the trainer and the project team are not in sync. This doesn’t have to be a big deal, but a coordination meeting should be held to discuss the status of the implementation, the training dates, the training agenda, needs and nuances of the people being trained, and items that should not be included in the training.
Yes. This way they can see how the system has been configured and can configure their training database the same way. This way, the functionality the users are trained on will be the same as what they will see when they begin using the new live system. Don’t skip this step. It is very important for the trainer to know a company’s configuration. If they don’t care to see it, get another trainer, because they will probably be using a canned approach.