The Three Levels of Tableau Support

Hi all,

Let’s talk a bit more about how to build a top Tableau support team. This post focuses on the support my team provides to our user base. At the moment we have just over 1000 Tableau Desktop users, and approximately 8000 active users on the server every month – that’s a lot of demand for our services.

Now users can be a pretty demanding bunch, with myriad questions, queries and problems. And we are busy. So how do we ensure that users get the level of support they need? Well, we provide three main levels of support, with the objective of ensuring that each type of user query or issue is directed to a channel that gives it the appropriate level of attention. This makes efficient use of my team’s valuable time and, critically, it cuts down the traffic to our email inbox – which is always a good thing.

[Screenshot: Some of the support options for our users]

Level 1 – Man down!

Red alert! Something is busted and it needs to be looked at now! For this we need any incident to be logged in a trouble ticket system, with appropriate priority and detail. We use ServiceNow for this (many other tools are available).

So if users think Tableau is broken or they need some immediate help then they log a ticket. This is mandatory. We need to track and log the progress, and the data is audited regularly. No ticket, no fix. We obviously don’t wait stubbornly for the ticket though; if there’s a big issue we investigate while the incident is being logged.

Once the ticket is logged it flows through our regular support flow. First my Level 1 team will take a look and see if there’s an easy fix. If they can’t fix it then it’s escalated to my more skilled Level 2 team, and potentially escalated again to my main Level 3 team for the trickier issues. There may be a future post coming about effective incident management, so I won’t go into detail here.

Some users don’t like us mandating that they raise an incident ticket. But it’s the only way to ensure traceability of problems.

Level 2 – It can wait

Sometimes users have problems or requests for assistance that are not so time-sensitive. Maybe a development dashboard has broken, or someone needs help from the team to perfect that Pareto chart, or hey – maybe they just wanna talk about how much they love Tableau (it happens!).

[Screenshot: Book your appointment with a Tableau Dr.]

That’s where a Tableau Dr. Session is needed. We dedicate three half-days a week to Tableau Dr. Sessions. Users log onto our community page and can book their session from a list of available slots. If the next slot is in a couple of days then they have to wait to be seen. Providing this structure to the sessions is critical as it allows my team to keep control. Before we implemented the structured sessions we were getting peppered with do-it-now requests for Dr. Sessions. That meant my team was context-switching all over the place and other projects were being impacted.

Providing structure also helps users understand that this is a finite resource, and as a result they are more appreciative of the dedicated time with my Tableau experts.

 

Level 3 – Let’s talk about Tableau, baby

The next level of support is for general chat. It could be a question about functionality, a point about performance, a geeky joke, or someone just wanting to ask about our upgrade strategy – it could be anything really.

 

That’s where our Lync Group Chat comes in. We’ve generally got a couple of hundred users on the chat channel at any one time, so it’s a decent forum for such questions and banter. It’s great for my support team to see a question get asked, and then, before we have a chance to pick it up, another user has provided the answer – a self-healing community – IT support nirvana!

[Screenshot: Wanna chat Tableau? Use our Group Chat]

 

What’s in it for me?

These support options ensure that each query gets an appropriate response. If it has all hit the fan, then we act quickly. If it needs more care and detail, then we book that time, and if it just needs someone to talk to then we’ve got a community of people ready to give that data hug. It also means we get hardly any emails. And email is a dreadful means of logging an issue, as there’s no traceability or feedback. Users only get annoyed when they feel a query is being ignored, and ensuring the correct channel for a query means users get feedback as appropriate and aren’t left wondering where that email question went to.

Also, my support team can plan their work and aren’t constantly context-switching – one of the biggest enemies of productivity.

So that’s it. Pretty simple to implement but mightily effective. As always, ping me if you want a more detailed run-through.

Happy vizzing, Paul


Tableau on Tour Keynote Speakers – Some Suggestions


Hi all, I love the Tableau Conference. But I also have a lot of fun at the smaller “Tableau On Tour” events. In particular I love the keynote speeches. We’ve had some crackers recently, with particular recent favourites being Tim …


Empowering Your Tableau Users With Makeovers & Proactive Support

Hi all,

More on building that dream Tableau Centre of Excellence function. I’ve previously posted about how to structure your support team and ways to build user engagement with “Tableau Champions”; this post focuses on how you can use Tableau’s introspection capabilities to deliver a more proactive support function.

What is proactivity?

You probably know the dictionary definition of proactive. To me it means seeing into the future and getting to an issue before it even happens. In the world of IT support, proactivity really is the Holy Grail, meaning the difference between a good support function and an amazing one. But it’s super-hard to achieve, especially in the complex enterprise-level setups that have multiple break points. You can almost never prevent something from breaking, no matter how good your monitoring is.

What you can do is add some proactivity into the way your team operates by identifying when your users are not getting the best from your service. In Tableau Server world we have the ability to spot the following and much more.

  • Slow Tableau visualisations
  • Consistently failing extracts
  • Stale content

I won’t go into how to achieve this; it’s the subject of a future post. But I’ll point you in the direction of two posts that should get you on your way: go check out Custom Admin Views by Mark Jackson and Why Are My Extracts Failing by Matt Francis.
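
To give a flavour of what’s involved, here’s a rough sketch of the sort of repository query that sits behind this kind of check, run against the Tableau Server Postgres database with the readonly user. To be clear, this isn’t lifted from either of those posts – it’s just an illustration. The background_jobs table is undocumented, so the job names and column names below are what I see on my version and may differ on yours.

# Rough sketch: spot extracts that have failed repeatedly in the last week.
# Assumes the repository is enabled with the 'readonly' user, and that the
# undocumented background_jobs table looks like it does on our version.
import psycopg2

conn = psycopg2.connect(
    host="mytableauserver",          # hypothetical hostname
    port=8060,                       # default repository port
    dbname="workgroup",
    user="readonly",
    password="your_readonly_password",
)

sql = """
    SELECT title, COUNT(*) AS failures
    FROM background_jobs
    WHERE job_name = 'Refresh Extracts'
      AND finish_code = 1                          -- 1 = failed on our version
      AND created_at > NOW() - INTERVAL '7 days'
    GROUP BY title
    HAVING COUNT(*) >= 3
    ORDER BY failures DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(sql)
    for title, failures in cur.fetchall():
        print(f"{title}: {failures} failed refreshes this week")
conn.close()

From there it’s a short step to finding the content owner and giving them that call.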

I get my team to scan our admin views to identify the users that, in our opinion, are not getting the best experience they can from Tableau. If we see someone who might be experiencing consistently slow visualisations, or has regularly failing extracts, then we give them a call. Often the user won’t even have a complaint. But our message is: “We think you’re not getting the best experience possible, and we want to make that happen”.

The initial reaction is often surprise. “I’m ok, I didn’t raise an issue” is a common response. But once we’ve worked with the user and improved their experience, you’ll find they are blown away. You may even get a call from their management!

You’ll find this kind of service is very rare in most organisations so if you can deliver it, even sporadically, then you’ll be regarded very highly.

Makeovers

This is pretty simple, if a little time-consuming. Browse the Tableau content on your server. Spot something that doesn’t look great – it might be slow, not compliant with your best practices, or just fugly. Download that content, and give it a makeover. Make it look great, maybe add some improved functionality, make it nail best practices.

This is one of my team’s favourite activities due to the reaction of the user / client. They LOVE it. It really creates a sense of engagement; the user feels that your team actually really cares about them. We’ve also had our Tableau Champions participate in Makeovers, which is even better as it saves my team some cycles.

Be careful though, some user content might be confidential and the user may not appreciate an admin poking around in their data. Also, remember that by doing this you are implying a criticism of their work, so handle the communication with care and sensitivity.

Also ensure that you don’t just change stuff and then drop it back in their laps. In a self-serve model like mine, users develop and support their own content, so it is crucial the user knows what you’ve changed, how you’ve changed it and what benefits you feel the modification brings. Pull them and their manager into a call, run through what you’ve done and then hand it back over to them to run with it.

These have been very successful in my organisation. Users truly appreciate the help and my team has fun doing it.

So there you are, a couple of tips for adding that gloss to your Tableau support service.

Cheers, Paul


Building user engagement with Tableau Champions

Hi all,

More on building an enterprise Tableau Centre of Excellence. That’s pretty much all I know about, hence why I seem to be writing about it a lot…

This is a short post about an initiative that is proving to be pretty successful at my organisation: we call it Tableau Champions.

[Image: We are the Champions!]

We’ve based this loosely on Tableau’s own Zen Master initiative. For those that don’t know, Zen Master is effectively a title awarded to members of the community on a yearly basis. For more information see here – http://www.tableau.com/ZenMasters

 

What makes a Tableau Champion?

We award the Champions badge to users who demonstrate:

  • Passion & enthusiasm for Tableau & data visualisation
  • Support of the Agile BI service at my organisation
  • Skills in Tableau & visual analytics
  • Willingness to share & assist other Tableau users
  • Involvement in the Agile BI community

Even amongst a huge user base like mine, it is easy to spot users who demonstrate these characteristics. They will become your trusted advisors, providing great feedback and helping you iron out the bumps in your service.

 

What’s in it for a Champion?

Here’s what my team does to help Champions:

  • Build Tableau skills & contacts
  • Increase internal profile across the org & gain stature as a Tableau SME
  • Increase external profile
  • Exposure to extra product information & roadmaps
  • Contribution to the development of the Agile BI service
  • Great collaboration opportunities across the firm

 

 What’s in it for my service?

And in return Champions help us by:

  • Performing makeovers & disseminating best practices
  • Publicising events & webinars
  • Blogging on the Agile BI community site
  • Hosting local user groups
  • Helping local users evolve their Tableau skills
  • Driving better understanding of visual analytics & Tableau

 

So it’s a mutually beneficial scheme, with Champions effectively acting as an extension of my own team. Win and indeed – win.

One thing I noticed was the way the Champions initiative immediately started to raise the bar in terms of user interaction with Tableau at my org. No sooner had I posted the first blog announcing our initial Champions than I had multiple emails from other users saying “I want to be a Champion”, “What do I need to do to get this recognition?”. I could even tell that some users were a little miffed not to have been selected. I then saw these users upping their game, posting more, interacting more, trying to be noticed. The Zen Master scheme elicits exactly this kind of response from the external community too.

So there you have it. We love to empower our users. And we love to reward those users that have become hooked on Tableau like we have.

Cheers, Paul


Tableau Server – all about the… Backgrounder

Hello everyone.

You may know me as a Tableau Centre of Excellence manager. That can involve a lot of paperclip pushing skills, with the real work being done by my excellent team (thx @jakesviz & The Information Lab). But I do try and get down and dirty with my lovely Tableau Server environment to keep my skills fresh. Obviously I don’t mess with it – @jakesviz gets pretty protective about his Server.

This series of posts is my attempt to shed some light on the internals of Server. Note there are many more experts in this field than me (Craig Bloodworth, Mark Jackson, Jen Vaughan, Tamas Foldi, Mike Roberts, Angie Greenhaw – to name but a few) so please do comment if anything is incorrect here. Maybe you guys could help me evolve this post?

What is the backgrounder?

The backgrounder is a process that runs as part of the Tableau Server application. As the name suggests, it handles background tasks such as refreshing extracts, running subscriptions, and processing tasks initiated from tabcmd.
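
For example, a tabcmd-initiated extract refresh like the one below just drops a task onto the backgrounder’s queue. The server URL, credentials and datasource name here are made up – substitute your own.

tabcmd login -s https://mytableauserver -u admin -p mypassword
tabcmd refreshextracts --datasource "Sales Extract"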

The backgrounder ships as two files: the .exe file and the .war file. The WAR file is a Web Application Archive, and contains all the necessary components and resources needed for a web application such as the backgrounder.

On a clustered environment you’ll find these files in D:\Program Files\Tableau\Tableau Server\worker.1\bin (may vary slightly with your installation).


Other files related to the backgrounder.

Template (.templ) files – These files TBD


There are also a few .rb files in D:\Program Files\Tableau\Tableau Server\worker.1\tabmigrate\db\migrate.

We also have a .properties file which contains all the config entries relevant to the backgrounder. Oddly, it also has almost all of the other stuff that you’d find in the main workgroup.yml file; I’d have expected it to be just the backgrounder config.


The backgrounder log files live in the directory set by backgrounder.log.dir – on my installation that’s D:/Program Files/Tableau/Tableau Server/data/tabsvc/logs/backgrounder.

On my server you can see two instances of backgrounder.exe running in Task Manager.

 

Can I mess with it?

Backgrounder can be configured. There are several settings present in both workgroup.yml and backgrounder.properties. Workgroup.yml is the master config file, and it populates the backgrounder.properties (and other .properties files) when a ‘tabadmin configure’ is run.

I don’t know what all of these do (yet) and the only one I’ve ever edited is ‘backgrounder.extra_timeout_in_seconds‘ which sets the max time in seconds that a backgrounder session can run for. Tableau kills off the session if this threshold is reached. Useful for forcing users to optimise their extract times!

I also pay attention to the ‘backgrounder.vmopts‘ parameter, as this defines the size of the java heap space for this component. All components have a vmopts setting and I’ve had to increase them on occasion due to out of memory problems.

You may also want to change the ‘backgrounder.log.level‘ if you need more debug info, although Tableau logs are chatty enough for me.

If there’s a golden parameter in this lot that you get value from then let me know in the comments.

backgrounder.deploy.dir: D:/Program Files/Tableau/Tableau Server/data/tabsvc/backgrounder
backgrounder.external_cache.concurrency_limit: 10
backgrounder.external_cache.enabled: true
backgrounder.external_cache.num_connections: 1
backgrounder.external_native_query_cache_disable: true
backgrounder.extra_timeout_in_seconds: 1800
backgrounder.failure_threshold_for_run_prevention: -1
backgrounder.jdbc.wg.connections: 8
backgrounder.jdbc.wg.idle_connections: 4
backgrounder.log.dir: D:/Program Files/Tableau/Tableau Server/data/tabsvc/logs/backgrounder
backgrounder.log.level: info
backgrounder.native_api.log.level: info
backgrounder.out_of_date_schedule_minutes: 240
backgrounder.ping_dataengine.millis_to_wait: 5000
backgrounder.ping_dataengine.num_retries: 24
backgrounder.ping_services.num_retries: 10000000
backgrounder.ping_services.time_to_wait: 5000
backgrounder.purge_directories.directories: ""
backgrounder.querylimit: 7200
backgrounder.restart_interval_in_minutes: 480
backgrounder.restrict_serial_collections_to_site_level: true
backgrounder.search_index_verification.enabled: true
backgrounder.sheet_image_api.max_age: 240
backgrounder.sleep_interval: 10
backgrounder.sort_jobs_by_run_time_history_observable_hours: -1
backgrounder.sort_jobs_by_type_schedule_boundary_heuristics_milliSeconds: -1
backgrounder.timeout_tasks: refresh_extracts, increment_extracts, subscription_notify, single_subscription_notify
backgrounder.tomcat.threads: 4
backgrounder.urlprefix: backgrounder
backgrounder.vmopts: -XX:+UseConcMarkSweepGC -Xmx512m

Note that Tableau Support don’t like you to edit config files manually; they recommend that you use the tabadmin set commands to change any parameters. They might have to change that recommendation when we see Tableau Server on Linux.
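
For example, to bump the extract timeout I mentioned above to an hour, the sequence looks roughly like this (the exact steps can vary slightly between versions, so check the docs for yours):

tabadmin stop
tabadmin set backgrounder.extra_timeout_in_seconds 3600
tabadmin configure
tabadmin start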

For more about .templ, Ruby & properties files check out this from Tamas Foldi.

 

What are the problems with the backgrounder?

Here are some of the things that can be problematic with the backgrounder.

  • Single Threaded – This means the backgrounder process can only run one thread at a time, a thread being a set of executable instructions that a process can perform. The upshot of this is that your backgrounder works through a queue of tasks one-by-one.
  • Latency – Due to the single threaded nature of backgrounder, you may see delays or ‘latency’. For example, if you have one backgrounder, and 2 tasks for it to perform at 2am, then task 2 will have to wait until task 1 has finished. If task 1 takes an hour then task 2 won’t start until 3am. This can be annoying for users that expect their data to be refreshed by a certain time.
  • Resource Intensive – The backgrounder can consume a significant amount of processing power (CPU) and input / output (I/O) on your server. This is dependent on the type of task it is performing. It’s not uncommon to see a backgrounder node consuming 100% CPU.
  • Other Stuff – The backgrounder process also does other stuff on Tableau Server that isn’t concerned with extract refreshes. For example – reaping extracts, checking disk space, synching Active Directory groups, rebuilding the search index etc. Bear that in mind when building your system. In reality these tasks don’t take up too much resource but they do take some and you should be aware.

 

Isolating the backgrounders

A common configuration in a clustered environment is to dedicate one of your worker nodes to the backgrounder processes. This means you can dial up the number of backgrounders and let them do their stuff without worrying about any impact on other processes. This is one of the most common performance recommendations from Tableau support.

You can also get a lot of info out of the Postgres DB relating to backgrounder usage and performance. Nelson Davis has posted a guide to getting started here.
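
As a taster, a query along these lines against the repository’s background_jobs table gives you run time and queue latency per job. Again, the table is undocumented, so the column names here are simply what I see on my version:

SELECT job_name,
       title,
       started_at - created_at   AS queue_latency,
       completed_at - started_at AS run_time
FROM background_jobs
WHERE completed_at IS NOT NULL
ORDER BY completed_at DESC
LIMIT 100;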

 

Improvements I’d like to see

Ok so here are a bunch of improvements I’d like to see to the topic of backgrounders and extract management. I know some of this will drop in upcoming releases and some of these problems have been solved with custom solutions at some customer sites.

Alerting – Tableau Server doesn’t alert (email / IM / SMS) when a task fails. This means you’ll need to set up external monitoring to detect issues. I know Tableau are on this though so expect to see it in an upcoming version. Some people in the community have also coded their own solutions to this problem but it really should be native functionality.
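
Until it arrives natively, a home-grown check doesn’t need to be complicated. Here’s a minimal sketch of the sort of thing people build – poll the repository for recently failed jobs and email the admin team. The hostnames, credentials and addresses are placeholders, and as ever the background_jobs table is undocumented, so adapt it to your environment.

# Minimal sketch of a home-grown failure alert. Placeholders throughout.
import smtplib
from email.message import EmailMessage
import psycopg2

conn = psycopg2.connect(host="mytableauserver", port=8060,
                        dbname="workgroup", user="readonly",
                        password="your_readonly_password")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT job_name, title, completed_at
        FROM background_jobs
        WHERE finish_code = 1                          -- failed jobs only
          AND completed_at > NOW() - INTERVAL '15 minutes'
    """)
    failures = cur.fetchall()
conn.close()

if failures:
    body = "\n".join(f"{job} failed: {title} at {when}" for job, title, when in failures)
    msg = EmailMessage()
    msg["Subject"] = f"Tableau Server: {len(failures)} failed background job(s)"
    msg["From"] = "tableau-alerts@example.com"
    msg["To"] = "tableau-admins@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)

Schedule something like that every few minutes with Task Scheduler (or cron) and you have basic alerting.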

Control per site – We segregate our dev / test / prod user environments using sites, all on the same server. We run 8 backgrounder processes on that server, which are shared across the tasks on all sites. As an administrator I’d really like to be able to bind backgrounder processes to specific sites. For example, 2 backgrounders on each of dev & test sites, then the other 4 dedicated to the production site. That would ensure production tasks always have enough resource to be able to execute on time.

Control per process – I’d like to be able to stop / pause / mess with individual backgrounder processes easily. It is possible – see this from Toby Erkson, but it would be good to have this as part of an administrator console or something.

Control per type / size / pattern of extracts – It would be good if I could dedicate specific backgrounder processes to particular extracts based on their characteristics. In particular I’d like to allocate one backgrounder to all the extracts that take less than 1 minute to complete. Or even use this to reward users that show diligence with their extract management by dynamically prioritizing incremental refreshes or extracts that have a low failure rate.

Better metrics – I’d like to see exactly how much CPU is taken up by a particular backgrounder process or task / schedule or per project. This would be useful for chargeback.

Dynamic reprioritisation – I love all my users. But in particular the ones that take good care of their extract refreshes. I’d like Tableau to be able to dynamically increase the priority of tasks that complete quickly, are incremental and that have a low failure rate. The message being, if you want your stuff to get the best slice of available resource then help us out with best practice.

Disable run now – We’ve had some issues with the “run now” option that allows users to kick off an extract refresh on-demand using the UI. In particular we’ve seen some trigger-happy users bring our server down by hammering on the run now option multiple times. I’d like to disable that or maybe throttle it somehow.

Better guidelines from Tableau – The documentation from Tableau isn’t great in this area.

[Screenshot: Tableau’s documented backgrounder process recommendations]

I have an 8-core, 128GB server and run 8 backgrounders with no capacity issues. And I know of other organisations running way more than that. According to this doc I should be running between 2 and 4. That would be some serious under-utilization of the server. I’d like to see some clearer recommendations, maybe taking into account the variety of use cases that I’ve seen in other big enterprise deployments.

 

OK that’s all for now. This post could have been more detailed but I figure that I’ll get some valuable inputs from the community that will help me expand it. Actually in the time I was writing this, Mike Roberts was doing the same – check his post for some excellent info.

Cheers, Paul

 


The Svalbard Global Seed Vault


Hi all,

OK here we go. Iron Viz competition time. My first viz in a long time, so it’s good to get back using Desktop again. The first competition this year is the Food Viz contest!

1. The Idea

So this one’s all about food. Plenty of potential ideas here but I love to deviate from the norm and go a little bit off the wall, a little bit unusual.

I got thinking about food. But then I thought what would we do if there was NO food? If we had nothing to grow. If all the crops in the world failed overnight. What would we do? That would be a pretty bad situation for sure and someone must have a backup plan. I’m in IT as you might know so I do love a good backup plan.

And it turns out there is one. The Svalbard Global Seed Vault. Buried 130m into the Norwegian permafrost, this building looks more like a Bond villain’s hideout than a critical storage facility. Once I saw this website my mind started racing with questions and that’s a good sign that you’ve got a decent subject for a viz.

 

[Screenshot: The Svalbard Global Seed Vault]

Go take a look at the viz!

 

2. Data

I got the data from 3 main sources.

The main seed stocks data

Plenty of detail in the data, which gives some good potential for analysis. The main seed stats xls was pretty tricky to work with. There were a lot of nulls and gaps which I had to exclude from the dataset, and the file was pretty untidy. There were also close to a million rows in the file, and that meant my PC struggled at times. All of this made manipulating the data trickier than I would have liked.

 

3. Viz Design

As with last year’s entry I thought I’d use Story Points again. This format has limitations but I think it works well for visualisations that answer multiple questions. In terms of formatting, I’ll be honest. I just didn’t have the time to mess about so I pretty much went with the same style that I used for my Evolution of the Speed Record viz last year.

[Screenshot: Construction stats]

I also thought I’d use a lot of images with this viz. The seed vault is an impressive construction and had a load of really good quality images available for use. I found it was useful to use a text box to provide additional commentary on each slide.

 

 

 

[Screenshot: Seed vault funding]

Most of the information about the seed vault made a big deal about how this was a big global project. This led me to question who was genuinely contributing to and supporting the project, and who was just pretending to. I was pretty sure there would be a big difference in contributions, both in terms of stock and also finance.

 

 

[Screenshot: Embedded Wikipedia page]

A technique I learned last year was embedding a contextual Wikipedia page into the viz. This provides more detail for anyone wanting to know more about the data points. A good tip is to append “?printable=yes” to the URL to display a more cut-down page, as well as using the mobile URL (thanks to David Pires for that tip). Some of the links didn’t work as there wasn’t a direct Wiki page – no big deal.
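
For example, a hypothetical embed URL for the vault’s own article would look something like this (swap in whichever page matches your data point):

https://en.m.wikipedia.org/wiki/Svalbard_Global_Seed_Vault?printable=yes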

 

So there you go. An interesting story for sure and one that was pretty enjoyable to put together.

 

4. Challenges

This was my first viz in a while. I’ve spent the last year knee-deep in Tableau Server and have a crazy busy job building a Tableau Centre of Excellence, supporting thousands of demanding users.

So my biggest challenge wasn’t data, or thinking of a subject, it was my own lack of ability with Tableau Desktop. I was shocked at how rusty I’d become and even some basic tasks took way longer than they should have. On the plus side it was great to be back on the vizzing horse again! I’m now inspired to get stuck into some of the online training and boost my skills.

Another challenge was actually deciding to have a go. The standards in the Tableau Community have gone through the roof in the last year, and the level of quality out there is absolutely amazing. So for the first time ever I was nervous about even getting my entry out there.

 

5. Analysis & Story

So what can we take from this story? Here are some of the key observations that Tableau has allowed me to glean from the dataset.

  • The Svalbard Global Seed Vault was a decent build. Didn’t cost too much and also only took 20 months. Pretty impressive going.
  • Some unusual crops stored in the seed vault. Rice at the top, and mostly concentrated around the Triticeae tribe of crop – wheat, maize etc. Surprisingly few fruit. I like blueberries so I’d be stuffed without them for my doomsday breakfast.
  • Probably not a surprise to see India top the seed donations chart but it was curious to see several African nations amongst the top donators.
  • I was surprised to see seed donation amounts tailing off big time in recent years. I wonder if that’s down to project apathy or maybe we’ve just got all the samples we need for now?

Wanna know even more? Go check out this Interactive 360 tool.

So that’s it. I hope you enjoy the visualisation. If you do then please consider voting for me in the IronViz competition.

Regards, Paul


How To Set Up Your Tableau Server Environments

Hi,

Guess what this post is about – yes TABLE CALCULATIONS…. haha. No chance. Talk to Jonathan Drummey about those. This is of course yet more info that I hope will help you guys set up a dream Enterprise Tableau deployment.

Today we are gonna talk about Environments – i.e. what Tableau environments should you create in your organisation to give your team the best chance of success and keep your lovely users happy?

As always, I’m not saying this is THE way to do it. There are tons of great setups out there. I’ll just tell you what we have. Feel free to suggest better methods in the comments.

 

Environments for your users

This section is concerned with environments that you will provide for your Tableau users to do their work. Typically this will follow the standard Information Technology Infrastructure Library (ITIL) environment definitions, but there are a few things you can do to add extra options for your users.

These are the environments our users have at their disposal:

  • Production – The main business & user facing environment. Content published here is authoritative, follows best practice (hopefully) and is actively supported.
  • Testing – aka UAT. Generally used for final testing of uploaded content
  • Development – The environment where content is first shared as part of the development process.
  • Scratch – An extra environment for content that doesn’t need environment management. E.g. User wants to temporarily share content with a couple of colleagues.

Providing these environments gives users crucial options and flexibility. Your Tableau service will most likely serve many different business areas and teams, each with different practices for content development and release management. Some teams will rigorously follow Systems Development Lifecycle (SDLC) processes, creating content in development, promoting to User Acceptance Testing (UAT) and then eventually to Production. Other teams are totally happy to change content directly in Production, as and when they feel like it.

Crucially, we don’t mandate what our users do; it’s a self-service model, and so long as they follow their own due-diligence and governance procedures then that’s cool with me. The important thing is that we give them options to work with Tableau in the way that they want. If they break anything then they know it’s down to them.

The scratch environment is an interesting concept. It started with good intentions but realistically not many people are using it. So it looks like we might bin that.

Note that we use Tableau sites to segregate our environments.
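
If you go the same route, sites can be created from the server UI or scripted with tabcmd, something along these lines (the server URL, credentials and site names below are just examples):

tabcmd login -s https://mytableauserver -u admin -p mypassword
tabcmd createsite "Development"
tabcmd createsite "Testing"
tabcmd createsite "Scratch"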

 

Environments for your team

This is different from the above user-facing environments. These are the environments that your team uses for the service you provide. Obviously all this costs money in terms of hardware procurement and usage, depending on the spec you choose.

  • Production – Main environment that serves your users. In our environment this also includes the UAT, Development & Scratch sites for users – but we class it all as production. That might seem odd, but remember that many teams will be development teams, and to them the development site / area is their equivalent of production. So if the development site is down then they can’t work.
  • Disaster Recovery (DR) – For use in the event of a Production outage that can’t be easily restored. Exact same spec as Production. Totally identical, so that config can be restored and this server can be used as Production. You’ll need to make sure this environment gets the same upgrades as your Production environment.
  • UAT – This is UAT for my team. If we want to make a change to Production, it gets final testing here. This environment is also the exact same spec as Production to ensure an accurate test. If it fails here then it’s likely to fail in Production as well. We use UAT for testing maintenance releases, config changes and other potentially disruptive non-Tableau related changes to the server. Additionally, we make this environment available to users for a couple of weeks UAT prior to releasing new versions to production.
  • Engineering – Lower spec than prod & UAT. For testing the latest available release from Tableau, which is likely to be a higher version than production. It’s useful for spotting bugs in new versions or confirming that bug fixes work.
  • Beta Test – We are proud to be part of Tableau’s pre-release testing audience. We use this server to test releases in the Beta programme. Lower spec than engineering. To the point that the server only just meets the minimum requirements.
  • Alpha Test – We use this to test the alpha releases or any extra work we may be doing with developers at Tableau. We love to be involved in the genesis of new functionality.

So that’s what we are lucky enough to have. It’s not perfect but it allows us to give our users a ton of flexibility in how they use Tableau, and also my own team always has a place to test new releases, plan upgrades and help Tableau with their pre-release programmes.

I’m interested to see what the community has in terms of environments. Let me know in the comments. Remember there are a load of other posts on this blog about Enterprise Tableau considerations.

Cheers, Paul
