The ability to restore a SQL Server database over and over can be handy in a test scenario, for instance when you need to test a schema update for automated deployment through DbUp/Octopus – or when you otherwise need your database in a known state prior to testing.
However, the restore is quite often blocked because the database is in use, resulting in the error above.
My solution to this is to script the restore to run in single user mode like so:
- USE [master]
- ALTER DATABASE [BenchmarkDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
- RESTORE DATABASE [BenchmarkDB] FROM DISK = N'D:\BenchmarkDB.bak' WITH FILE = 1, MOVE N'BenchmarkDB_log' TO N'D:\Data\BenchmarkDB.ldf', NOUNLOAD, REPLACE, STATS = 5
- ALTER DATABASE [BenchmarkDB] SET MULTI_USER WITH ROLLBACK IMMEDIATE
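If you restore several databases this way, the three-step batch is easy to generate instead of hand-editing. A minimal Python sketch (the function name is mine; the paths in the usage example are just placeholders matching the script above):

```python
def single_user_restore_script(database, backup_path, log_path):
    """Build the single-user -> restore -> multi-user T-SQL batch."""
    return "\n".join([
        "USE [master]",
        # Kick out other connections so the restore is not blocked.
        f"ALTER DATABASE [{database}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE",
        f"RESTORE DATABASE [{database}] FROM DISK = N'{backup_path}' "
        f"WITH FILE = 1, MOVE N'{database}_log' TO N'{log_path}', "
        "NOUNLOAD, REPLACE, STATS = 5",
        # Let everyone back in once the restore has finished.
        f"ALTER DATABASE [{database}] SET MULTI_USER WITH ROLLBACK IMMEDIATE",
    ])

script = single_user_restore_script(
    "BenchmarkDB", r"D:\BenchmarkDB.bak", r"D:\Data\BenchmarkDB.ldf")
```

Feed the resulting batch to sqlcmd or your favorite client as usual.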
DbUp is a nice open source project that lets you deploy SQL scripts via a .NET executable, thus enabling deployment through a pipeline like Octopus Deploy. DbUp tracks which scripts have already been run on your database, so you don't have to worry about applying the same update twice.
The basic problems with idempotency and databases remain, of course, but that is a completely different story and you still have to manage it yourself.
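The tracking idea itself is simple. A minimal Python sketch of DbUp-style journaling (the names here are mine, not DbUp's API): run each script at most once, in order.

```python
def run_pending(scripts, journal, execute):
    """Execute (name, sql) pairs whose name is not yet in the journal."""
    applied = []
    for name, sql in scripts:
        if name in journal:
            continue  # already deployed once -- skip to avoid a double-update
        execute(sql)
        journal.add(name)  # remember success so the script never runs again
        applied.append(name)
    return applied
```

The real DbUp keeps the journal in a SchemaVersions table in the target database; the plain set above just stands in for it.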
DbUp has a nice fluent interface, but doing programmatic configuration unfortunately rather breaks that. It took me a while to figure out, so here is how I managed to change the execution timeout in my DbUp deployment C# program for a long-running script.
First use DeployChanges.To.SqlDatabase(connectionString).WithScriptsEmbeddedInAssembly( … ) to create a variable of type UpgradeEngineBuilder.
Then invoke the Configure callback action on the UpgradeEngineBuilder like so:
upgradeEngineBuilder.Configure(c =>
{
    c.ScriptExecutor.ExecutionTimeoutSeconds = 30 * 60; // 30 minutes in seconds
    Console.WriteLine("Configure ExecutionTimeoutSeconds to " + c.ScriptExecutor.ExecutionTimeoutSeconds);
});
After that you can use the fluent interface again to build the engine and run the SQL scripts:
var upgradeEngine = upgradeEngineBuilder.LogToConsole().Build();
var result = upgradeEngine.PerformUpgrade();
This is a very annoying thing to encounter when connecting to a Remote Desktop on a machine in the network. That green bar just goes round and round for ages! I don’t know why this happens – in my case it is a machine in another domain that I log in to:
Let us say that this machine resides on IP address 10.20.30.40; then the following PowerShell snippet will fix your woes:
- Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Terminal Server Client\Servers\10.20.30.40' -Name UsernameHint -Value ''
- mstsc.exe /v:10.20.30.40
There are other ways to go about this, such as the REG command in a batch-file, but these days doing .bat files is just soooo old school :)
If you have a SQL Server database project in Visual Studio that needs to reference another database, e.g. a view containing a three-part reference like [OEDB].[dbo].[Inventory], you need to reference the other database by right-clicking “References” and choosing “Add Database Reference”.
This brings up this screen:
Near the bottom is a field called “Database variable”. If this is filled in (which is the default), all references to that database in the imported SQL must go via the variable, as can be seen from the example [$(oedb)].[Schema1].[Table1] shown just below the “Database variable” field.
If you fail to correct occurrences in your SQL from [DatabaseName] to [$(DatabaseName)] you will probably experience an error list with hundreds of SQL71561 errors like this:
SQL71561: View: [dbo].[qryCreateYMD] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [OEDB].[dbo].[Inventory].[Id] or [OEDB].[dbo].[Inventory].[OEDB]::[dbo].[Inventory].[Id]
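If you do decide to keep the database variable, correcting hundreds of occurrences by hand is no fun. A rough Python sketch of the rewrite (regex-based, only handles the bracketed form; the function name is my own):

```python
import re

def use_database_variable(sql, database, variable=None):
    """Rewrite [Database].x.y references to the SQLCMD variable form [$(var)].x.y."""
    variable = variable or database
    # Match the bracketed database name only when followed by a dot,
    # so standalone identifiers with the same name are left alone.
    pattern = re.compile(r"\[" + re.escape(database) + r"\]\.")
    return pattern.sub("[$(" + variable + ")].", sql)
```

Run it over each .sql file in the project and the SQL71561 errors for that database should disappear.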
In my mind this wasn’t really the expected behavior, but what you can do to just use the references you have, without changing your views, is to simply clear out the “Database variable” field:
As can be seen from the example usage, references will now work as expected…
This is basically what is explained here (when I finally found the explanation), but I wanted to add the screenshots of how this works.
Setting up a new development box for myself, I had forgotten all about the necessity of the aspnet_regiis.exe -i command. The information is fairly easy to find, but with this post I’d like to cut away the fat and stick to a very simple how-to.
On my Windows 8.1 machine, running aspnet_regiis -i resulted in this message:
Microsoft (R) ASP.NET RegIIS version 4.0.30319.33440
Administration utility to install and uninstall ASP.NET on the local machine.
Copyright (C) Microsoft Corporation. All rights reserved.
Start installing ASP.NET (4.0.30319.33440).
This option is not supported on this version of the operating system. Administrators should instead install/uninstall ASP.NET 4.5 with IIS8 using the "Turn Windows Features On/Off" dialog, the Server Manager management tool, or the dism.exe command line tool. For more details please see http://go.microsoft.com/fwlink/?LinkID=216771.
Finished installing ASP.NET (4.0.30319.33440).
Googling around a bit more turned up this link that explains how to go to Windows Features (via appwiz.cpl) to activate .NET 4.5 on IIS:
Or to run this from an administrative prompt:
dism /online /enable-feature /all /featurename:IIS-ASPNET45
Please note the /all parameter, which is not mentioned in the KB article but is necessary to automatically include all the required parent features.
In order to get my web services running I have spent more hours struggling with the IIS beast. It appears (according to this) that you have to switch on yet another feature:
Or, if you want to automate all this, more dism commands:
dism /online /enable-Feature /all /FeatureName:WCF-HTTP-Activation
dism /online /enable-Feature /all /FeatureName:WCF-HTTP-Activation45
I have occasionally found it convenient to execute a command against all the databases on a server, for instance to ensure that all the databases on my test server have RECOVERY MODE set to SIMPLE, and to perform a SHRINK operation on each of them to reclaim disk space. Googling this problem turned up quite a lot of samples like this, but here is what worked for me:
DECLARE @dbname sysname
DECLARE c1 CURSOR READ_ONLY FOR
    SELECT name FROM sys.databases
    WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
OPEN c1
FETCH NEXT FROM c1 INTO @dbname
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Updating ' + @dbname
    EXEC('ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE;');
    EXEC('DBCC SHRINKDATABASE(N''' + @dbname + ''');');
    FETCH NEXT FROM c1 INTO @dbname;
END
CLOSE c1
DEALLOCATE c1
This is of course very easy to modify for other purposes. Some posts warned against the poor performance of cursors, but that isn’t really an issue in this scenario.
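The same filter-and-template idea also works outside T-SQL, e.g. if a deployment script needs to emit the statements. A small Python sketch (the function name and {db} placeholder are mine, not from any SQL Server tooling):

```python
SYSTEM_DATABASES = {"master", "tempdb", "model", "msdb"}

def per_database_statements(databases, template):
    """Expand a statement template for every user database.

    `{db}` in the template is replaced with the bracket-quoted database
    name; a `]` inside a name is doubled, per the usual T-SQL quoting rule.
    """
    statements = []
    for name in databases:
        if name in SYSTEM_DATABASES:
            continue  # skip the system databases, as the cursor query does
        quoted = "[" + name.replace("]", "]]") + "]"
        statements.append(template.format(db=quoted))
    return statements
```

Pass in the database names (e.g. read from sys.databases) and a template such as "ALTER DATABASE {db} SET RECOVERY SIMPLE;".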
I have a newly installed SQL Server 2012 set up for testing a Self-Service BI concept. This means that the server is running a SQL relational instance and an Analysis Services Tabular instance. Today I added a new Analysis Services instance for running our old OLAP cube for good measure (literally) as well.
To my surprise, trying to process the OLAP cube resulted in the above error. Apparently OLAP mode is still based on the old native ADO provider from 2008 R2, but this doesn’t seem to get installed automatically…
The article here by André Van de Graaf has some information on handling this error in Excel/PowerPivot 2010, and on the Microsoft Office Project Support Weblog a guy named Martin Smith had an article – the link is now dead, so I used Google cache – the title of which was “New SQL Server–and some different cube building errors”, that also discusses the problem.
The solution, as far as I could work out, was to download and install the 2008 R2 provider; the link to the x64 version is here, courtesy of the above-mentioned posts.
It’s hard to tell whether this is a general problem with SQL Server Analysis Services 2012 or just my particular configuration. It seemed odd that so little information was google’able, but in any case installing the old provider is a way to circumvent the issue that works for me, so it might work for you too.