
Archive for the ‘SQL Server’ Category

Table-Valued Parameters: Reuse model collection types

July 31, 2015

This is the last post of the Table-Valued Parameters (TVP) series. In the previous posts we saw that we can use TVPs with CodeFluent Entities: we started with a single-column TVP, then we created a more complex table type. The last thing to know is that CodeFluent Entities can generate a TVP per entity. Let's see what this means:

Database:

CREATE TYPE [dbo].[CustomerType] AS TABLE (
 [Customer_Id] [int] NOT NULL,
 [Customer_FirstName] [nvarchar] (256) NULL,
 [Customer_LastName] [nvarchar] (256) NULL,
 [_rowVersion] [binary] (8) NULL)
GO

CREATE PROCEDURE [dbo].[Category_ProcessCategoriesWithCollection]
(
 @categories [dbo].[CategoryType] READONLY
)
AS
SET NOCOUNT ON
DECLARE @_c_categories int; SELECT @_c_categories= COUNT(*) FROM @categories
SELECT * FROM Category
INNER JOIN @categories AS c
ON [Category].[Category_Name] LIKE c.Category_Name + '%'
RETURN
GO

Business Object Model:

public static void ProcessCategoriesWithCollection(
    Samples.TVP.CategoryCollection categories
)
{
    CodeFluent.Runtime.CodeFluentPersistence persistence =
      CodeFluentContext.Get(
        Samples.TVP.Constants.Samples_TVPStoreName
      ).Persistence;
    persistence.CreateStoredProcedureCommand(
      null,
      "Category",
      "ProcessCategoriesWithCollection");
    persistence.AddArrayParameterObject("@categories", categories);
    ...
}
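For the curious, here is roughly what AddArrayParameterObject abstracts away in plain ADO.NET: the collection is sent as a structured parameter whose TypeName matches the generated table type. This is only a minimal, hypothetical sketch (the connection string, the DataTable and its columns are assumptions; the CodeFluent runtime builds all of this plumbing for you):

using System.Data;
using System.Data.SqlClient;

// Minimal ADO.NET sketch of a table-valued parameter call.
// The DataTable's columns must match the dbo.CategoryType definition
// (analogous to the CustomerType shown above).
static void ProcessCategoriesWithCollection(string connectionString, DataTable categories)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(
        "[dbo].[Category_ProcessCategoriesWithCollection]", connection))
    {
        command.CommandType = CommandType.StoredProcedure;

        // A TVP is passed as a structured parameter typed after the SQL Server table type.
        SqlParameter parameter = command.Parameters.AddWithValue("@categories", categories);
        parameter.SqlDbType = SqlDbType.Structured;
        parameter.TypeName = "dbo.CategoryType";

        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // Consume the joined rows returned by the procedure.
            }
        }
    }
}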

So we have strongly typed code generated from the model. How do we generate this code?

First we have to set the table type name (the name used to create the database type).


Then we tell the BOM producer to use a user-defined type (UDT).


Yep, only two simple attributes to set, and CodeFluent Entities generates lots of code for you :)

Happy Querying,

The R&D Team.

Table-Valued Parameters: CFQL operators

July 10, 2015

CodeFluent Query Language (CFQL) allows you to quickly create simple methods. CFQL provides support for common operations with Table-Valued Parameters.

To show the result of the queries, we will use the following data (three customers: John, Jane and Jimmy):

IN

 

Using the IN operator, you can filter out rows whose value is not in the list. For instance:

load(guid[] ids) WHERE Id IN (@ids)

The generated SQL procedure:

CREATE PROCEDURE [dbo].[Customer_LoadByIdsIn]
(
 @ids [dbo].[cf_type_Customer_LoadByIdsIn_0] READONLY
)
AS
SET NOCOUNT ON
DECLARE @_c_ids int; SELECT @_c_ids= COUNT(*) FROM @ids
SELECT DISTINCT
    [Customer].[Customer_Id],
    [Customer].[Customer_Name],
    [Customer].[Customer_DateOfBirth]
FROM [Customer]
WHERE [Customer].[Customer_Id] IN (((SELECT * FROM @ids)))

RETURN
GO

 

 // John and Jane
CustomerCollection.LoadByIds(new int[] { 1, 2 });
 // Empty result set
CustomerCollection.LoadByIds(new int[0]);

Equals (=)

 

The equals (=) operator also filters rows against a list but, unlike the IN operator, when the list is empty no filter is applied. For instance:

load(guid[] ids) WHERE Id = @ids

The generated SQL procedure:

CREATE PROCEDURE [dbo].[Customer_LoadByIdsEquals]
(
 @ids [dbo].[cf_type_Customer_LoadByIdsEquals_0] READONLY
)
AS
SET NOCOUNT ON
DECLARE @_c_ids int; SELECT @_c_ids= COUNT(*) FROM @ids
SELECT DISTINCT
    [Customer].[Customer_Id],
    [Customer].[Customer_Name],
    [Customer].[Customer_DateOfBirth]
FROM [Customer] LEFT OUTER JOIN @ids AS _t_ids ON ((@_c_ids = 0)
OR (([Customer].[Customer_Id] = _t_ids.Item)))
WHERE ((@_c_ids = 0) OR (([Customer].[Customer_Id] = _t_ids.Item)))

RETURN
GO

 

 // John and Jane
CustomerCollection.LoadByIds(new int[] { 1, 2 });
 // John, Jane and Jimmy
CustomerCollection.LoadByIds(new int[0]);

Comparison operators: Like, StartsWith, greater than, freetext, etc.

 

You can use comparison operators between a single value and a TVP:

load(string[] names) WHERE Name STARTSWITH @names

The generated SQL procedure:

CREATE PROCEDURE [dbo].[Customer_LoadByNamesStartsWith]
(
 @names [dbo].[cf_type_Customer_LoadByNamesStartsWith_0] READONLY
)
AS
SET NOCOUNT ON
DECLARE @_c_names int; SELECT @_c_names= COUNT(*) FROM @names
SELECT DISTINCT
    [Customer].[Customer_Id],
    [Customer].[Customer_Name],
    [Customer].[Customer_DateOfBirth]
FROM [Customer] LEFT OUTER JOIN @names AS _t_names ON ((@_c_names = 0)
OR (([Customer].[Customer_Name] LIKE (_t_names.Item + '%'))))
WHERE ((@_c_names = 0)
OR (([Customer].[Customer_Name] LIKE (_t_names.Item + '%'))))

RETURN
GO

 

 // Jane, John
CustomerCollection.LoadByNamesStartsWith(new string[] { "Ja", "Jo" });
 // Jane, John and Jimmy
CustomerCollection.LoadByNamesStartsWith(new string[0]);


Custom

 

You can use raw methods with a Table-Valued Parameter. Here's an example with inline SQL:

LOAD(string[] names)
WHERE [EXISTS (SELECT * FROM @names AS n WHERE $Name$ = n.Item)]

The generated SQL procedure:

CREATE PROCEDURE [dbo].[Customer_LoadCustom]
(
 @names [dbo].[cf_type_Customer_LoadCustom_0] READONLY
)
AS
SET NOCOUNT ON
DECLARE @_c_names int; SELECT @_c_names= COUNT(*) FROM @names
SELECT DISTINCT
    [Customer].[Customer_Id],
    [Customer].[Customer_Name],
    [Customer].[Customer_DateOfBirth]
FROM [Customer]
WHERE EXISTS (
SELECT * FROM @names AS n WHERE [Customer].[Customer_Name] = n.Item
)

RETURN
GO

 

 // Jane, John
CustomerCollection.LoadCustom(new string[] { "Jane", "John" });
 // Empty result set
CustomerCollection.LoadCustom(new string[0]);

Happy Querying,

The R&D Team.

SQL Server In-Memory OLTP

July 10, 2014

In-Memory OLTP comes with Microsoft SQL Server 2014 and can significantly improve the performance of OLTP database applications. It is a memory-optimized database engine integrated into the standard SQL Server engine. It provides memory-optimized tables which are fully transactional and are accessed using classic Transact-SQL statements.

In-Memory tables come with some limitations. We won't enumerate them all, only those related to CodeFluent Entities:

  1. Foreign keys aren’t supported
  2. RowVersion and Timestamp columns aren’t supported: http://msdn.microsoft.com/en-us/library/dn133179.aspx
  3. Default constraints aren’t supported
  4. Some Transact-SQL constructs aren’t supported: http://msdn.microsoft.com/en-us/library/dn246937.aspx

Let’s handle those four points!

Foreign Keys

There are two options:

  • Don’t create relation :(
  • Create relation without foreign key :)

The second solution requires the use of an Aspect. Fortunately, we already wrote one a while ago: http://www.softfluent.com/forums/codefluent-entities/how-to-disable-creating-foreign-key-by-sql-producer-

Even if foreign keys do not exist anymore, CodeFluent Entities still generates LoadBy_Relation methods so you won’t see any difference in your code. :)
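For instance, with a Customer entity related to a ContactSource entity (the relation that appears in the INSERT statement further below), the calling code stays the same whether or not the physical foreign key exists. A purely hypothetical sketch, as the generated method name depends on your model:

// Hypothetical sketch: the call site does not change when the physical foreign key is dropped.
ContactSource source = ContactSource.Load(sourceId);
CustomerCollection customers = CustomerCollection.LoadByContactSource(source);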


RowVersion

RowVersion is not supported by In-Memory tables, so let's remove it. We have to set "Concurrency Mode" to "None".


 

Default Constraints

The default constraints used by the tracking columns (creation time & last write time) are not supported. Here we have two options:

  • Remove default constraints :(
  • Move them into the Save stored procedure :)

The first option is available in the Property Grid, at project or entity level, by removing the tracking time columns:

Properties

The second option can be done with an Aspect, as you can see in the full example (see below). The edited INSERT statement looks like this:

    INSERT INTO [Customer] (
        [Customer].[Customer_Id],
        [Customer].[Customer_Name],
        [Customer].[Customer_ContactSource_Id],
        [Customer].[_trackCreationUser],
        [Customer].[_trackLastWriteUser],
        [Customer].[_trackLastWriteTime])
    VALUES (
        @Customer_Id,
        @Customer_Name,
        @Customer_ContactSource_Id,
        @_trackLastWriteUser,
        @_trackLastWriteUser,
        (GETDATE())) -- Default Value

Transact-SQL

By default, the SQL Server producer surrounds the procedure code with a transaction. This transaction isn't supported when using In-Memory tables. The following exception is thrown when calling the stored procedure:

Unhandled Exception: System.Data.SqlClient.SqlException: Accessing memory optimized tables using the READ COMMITTED isolation level is supported only for autocommit transactions. It is not supported for explicit or implicit transactions. Provide a supported isolation level for the memory optimized table using a table hint, such as WITH (SNAPSHOT).

To remove it, we have to configure the SQL Server producer not to generate it:

SQL Server

 

Migrate the table

After those small changes, we can migrate the table to an In-Memory table:

Migration

Migration Result

 

We can now use the In-Memory table from the application:

Customer customer = new Customer();
customer.Name = "John Doe";
customer.Save();

All-in-One method

All the previous steps are automated by an aspect. All you have to do is include the aspect and set "enabled" on the tables:

SqlServer In Memory Aspect

The full code sample including the aspect is available on our GitHub repository.

The R&D Team

Using LocalDB with CodeFluent Entities

April 8, 2014

With Microsoft SQL Server 2012, Microsoft introduced a feature called LocalDB, a new edition of SQL Server Express. LocalDB is created specifically for developers and is much easier to install (no service) and manage than the standard editions. Developers initiate a connection by using a special connection string. It supports the AttachDbFileName property, which allows you to specify a database file location.

When connecting, the server is automatically created and started, enabling the application to use the database without complex configuration tasks. This edition uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server.
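As a quick illustration, here is a minimal connection sketch. The instance name and .mdf path are assumptions: (localdb)\MSSQLLocalDB is the LocalDB 2014 default automatic instance, (localdb)\v11.0 the LocalDB 2012 one.

using System.Data.SqlClient;

// Minimal sketch: the automatic LocalDB instance is created and started on demand
// when the connection opens; the .mdf path below is purely illustrative.
string connectionString =
    @"Data Source=(localdb)\MSSQLLocalDB;" +
    @"AttachDbFileName=C:\Data\MyApplication.mdf;" +
    @"Integrated Security=True";

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
}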

The installation of Visual Studio 2012 and 2013 includes LocalDB 2012, and you can download the SQL Server Express 2014 LocalDB edition directly from MSDN.

The SqlLocalDB utility helps you manage your LocalDB instances. The following command, "SqlLocalDB versions", lists all the LocalDB versions installed on your computer:

SqlLocalDb-versions

"SqlLocalDB info" lists the existing LocalDB instances owned by the current user, as well as all shared LocalDB instances:

SqlLocalDb-instances

To check the status and other details of an instance, you can run "SqlLocalDB info <instance name>":

SqlLocalDB-info

CodeFluent Entities Build 769 introduced support for Microsoft SQL Server 2014 and gives you the opportunity to use SQL Server LocalDB (2012 and 2014) as the persistence server of your CodeFluent Entities application.

The SQL Server producer allows you to generate your database layer on a SQL Server LocalDB instance:

SQL Server Producer LocalDb

Just build your model and connect to your LocalDB instance with SQL Server Management Studio or the Visual Studio Server Explorer. You can see that a new database has been created with the name you specified in the connection string, populated with the tables automatically inferred from your model, as well as the instances:

Server Explorer

Happy LocalDB-ing!

The R&D Team

Multi-database deployment with PowerShell and the Pivot Script Runner – Part 2

March 6, 2014

In Part 1 of this article, we looked at using PowerShell's strengths to automate the process of updating several databases through the PivotRunner tool.

Now, we want to go further and create a PowerShell command, better known as a Cmdlet.

Build the Cmdlet

A Cmdlet can be built directly in a PowerShell script, or through the .NET Framework. To do so, we need to inherit from System.Management.Automation.Cmdlet and decorate the class with the naming attribute.

By convention, the name of a Cmdlet consists of a verb, followed by a dash and a noun (e.g. Get-ChildItem and Add-PSSnapIn):

using System.Management.Automation;

namespace CodeFluentEntitiesCmdlet
{
    [Cmdlet(VerbsData.Update, "CFEDatabase", SupportsShouldProcess = true, 
            ConfirmImpact = ConfirmImpact.High)]
    public class UpdateCFEDatabase : Cmdlet
    {
    }
}

Here, the Cmdlet’s name will be Update-CFEDatabase.

Use the following PowerShell command to find the System.Management.Automation library: Copy ([PSObject].Assembly.Location) C:\MyDllPath

The SupportsShouldProcess and ConfirmImpact attribute properties allow the Cmdlet to use PowerShell's "Requesting Confirmation" feature.

The Cmdlet abstract class includes a fairly advanced command parameters engine to define and manage parameters:

[Parameter(Mandatory = true)]
public string ConnectionString { get; set; }

[Parameter(Mandatory = true)]
public string PivotFilePath { get; set; }

The Mandatory property tells the command parameter engine whether or not a parameter is required.

Cmdlet also exposes some methods which can be overridden. These pipeline methods allow the cmdlet to perform pre-processing, input processing, and post-processing operations.

Here, we’ll just override the ProcessRecord method:

protected override void ProcessRecord()
{
  // Process logic code
}
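A plausible body for it, shown here only as a sketch: ShouldProcess (provided by the Cmdlet base class thanks to SupportsShouldProcess) triggers PowerShell's confirmation prompt before calling the UpdateDatabase helper built in the next step:

protected override void ProcessRecord()
{
    // Sketch only: ask PowerShell for confirmation (honors -Confirm / -WhatIf)
    // before actually touching the target database.
    if (ShouldProcess(ConnectionString, "Update-CFEDatabase"))
    {
        UpdateDatabase();
    }
}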

Then, we need to use the PivotRunner which is located in the CodeFluent.Runtime.Database assembly.

The tool takes the connection string and the pivot script producer output file as parameters:

using CodeFluent.Runtime;
using CodeFluent.Runtime.Database.Management.SqlServer;

private void UpdateDatabase()
{
    try
    {
        PivotRunner runner = new PivotRunner(PivotFilePath);

        runner.ConnectionString = ConnectionString;

        if (!runner.Database.Exists)
        {
            WriteObject("Error: The ConnectionString parameter does not lead to an existing database!");
            return;
        }
        runner.Run();
    }
    catch (Exception e)
    {
        WriteObject("An exception has been thrown during the update process: " + e.Message);
    }
}

Do not forget to reference CodeFluent.Runtime.dll and CodeFluent.Runtime.Database.dll!

Moreover, we can recover the PivotRunner output (internal logs) by providing an IServiceHost implementation:

public class CmdletLogger : IServiceHost
{
    private Cmdlet _cmdLet;

    public CmdletLogger(Cmdlet cmdlet)
    { 
        _cmdLet = cmdlet;
    }

    public void Log(object value)
    {
        _cmdLet.WriteObject(value);
    }
}

runner.Logger = new CmdletLogger(this);
runner.Run();

PowerShell integration

The Cmdlet is now finished! :)

Now we’ll see how to call it from Powershell! Here, we have several options, but we shall see the PSSnapIn one.

The “Writing a Windows PowerShell Snap-in” article shows that a PSSnapIn is mostly a descriptive object which inherits from System.Configuration.Install.Installer and is used to register all the cmdlets and providers in an assembly.

So, let’s implement our Powershell snap-in:

using System.ComponentModel;
using System.Management.Automation;

namespace CodeFluentEntitiesCmdlet
{
    [RunInstaller(true)]
    public class CodeFluentEntitiesCmdletSnapin01 : PSSnapIn
    {
        public CodeFluentEntitiesCmdletSnapin01()
            : base() { }

        public override string Name
        {
            get { return ((object)this).GetType().Name; }
        }

        public override string Vendor
        {
            get { return "SoftFluent"; }
        }

        public override string VendorResource
        {
            get { return string.Format("{0},{1}", Name, Vendor); }
        }

        public override string Description
        {
            get { return "This is a PowerShell snap-in that includes the Update-CFEDatabase cmdlet."; }
        }

        public override string DescriptionResource
        {
            get { return string.Format("{0},{1}", Name, Description); }
        }
    }
}

Then, we build our solution which contains our Cmdlet and the PSSnapIn, and finally register the built library thanks to InstallUtil.exe (located in the installation folder of the .NET Framework):

Administrator rights are required.


By using the "Get-PSSnapIn -Registered" PowerShell command, we can check that our PSSnapIn is properly registered. This component can now be used in your PowerShell environment.


The "Add-PSSnapIn" command enables us to use our Cmdlet in the current PowerShell session.

As a result, we can update the PowerShell script we built previously:

param([string[]]$Hosts, [string]$PivotFilePath, [switch]$Confirm = $true)

Add-PSSnapin CodeFluentEntitiesCmdletSnapin01

if ($Hosts -eq $null -or [string]::IsNullOrWhiteSpace($PivotFilePath))
{
    Write-Error "Syntax: .\UpdateDatabase.ps1 -Hosts Host1[, Host2, ...] -PivotFilePath PivotFilePath"
    break
}

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

Write-Host "-========- Script started -========-"

$Hosts | foreach {
    $srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $_

    $online_databases = $srv.Databases | where { $_.Status -eq 1 -and $_.Name.StartsWith("PivotTest_") }
    
    if ($online_databases.Count -eq 0)
    {
        Write-Error "No database found"
        break
    }

    Write-Host "Database list:"
    $online_databases | foreach { Write-Host $_.Name }

    [string]$baseConnectionString = "$($srv.ConnectionContext.ConnectionString);database="
    $online_databases | foreach {
        Update-CFEDatabase -ConnectionString "$($baseConnectionString)$($_.Name)" -PivotFilePath $PivotFilePath -Confirm:$Confirm
    }
}

Write-Host "-========-  Script ended  -========-"

We can now simply deploy all the changes we've recently made to our databases, thanks to the Cmdlet and PivotRunner components.

The source code is available for download.

Happy PowerShelling!

The R&D team

Multi-database deployment with PowerShell and the Pivot Script Runner – Part 1

February 13, 2014

We are working on a solution designed with CodeFluent Entities, which stores and manages data within a SQL Server database created by the SQL Server producer.
Some time ago, we deployed this solution on many servers and now we want to keep them all up to date.
Recently, we have made some important changes and we want to deploy them to our many databases.

How to set up and automate the process of updating a range of databases while preserving their content?

Pivot Script Producer

To answer this question, we have developed a new producer called the "Pivot Script Producer". It generates one or more XML files which are a database snapshot of the current CodeFluent Entities project model.

Pivot Runner

These files, generated by the Pivot Script producer, are intended to be consumed by the PivotRunner tool of the CodeFluent.Runtime.Database library.
Using a connection string, it updates the targeted database from the files we previously generated.

The New SQL Server Pivot Script producer article shows that we can directly use the PivotRunner API from the library. But, even easier, we can just call one of the provided programs: CodeFluent.Runtime.Database.Client.exe or CodeFluent.Runtime.Database.Client4.exe, located in the CodeFluent Entities installation folder.

At this stage, we can very easily and quickly update one database. But we still want to apply this process on several databases!

PowerShell Script

Let’s use the PowerShell strengths ! :)

PowerShell is a scripting language developed by Microsoft, installed by default on any Windows system since Windows 7. With a fully object-oriented logic and a very close relationship with the .NET Framework, it has become an essential, simple and very useful tool. And that's why PowerShell is so cool!

Thus, we can easily imagine a script that takes a list of servers as its first parameter and the generated files path as the second one, in order to select and update the targeted databases (here, only the online databases whose name starts with "PivotTest_").

param([string[]]$Hosts, [string]$PivotFilePath)

if ($Hosts -eq $null -or [string]::IsNullOrWhiteSpace($PivotFilePath))
{
    Write-Error "Syntax: .\UpdateDatabases.ps1 -Hosts Host1[, Host2, ...] -PivotFilePath PivotFilePath"
    break
}

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

Write-Host "-========- Script started -========-"

$Hosts | foreach {
    $srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $_

    $online_databases = $srv.Databases | where { $_.Status -eq 1 -and $_.Name.StartsWith("PivotTest_") }
    
    if ($online_databases.Count -eq 0)
    {
        Write-Error "No database found"
        break
    }

    Write-Host "Database list:"
    $online_databases | foreach { Write-Host $_.Name }

    [string]$baseConnectionString = "$($srv.ConnectionContext.ConnectionString);database="
    $online_databases | foreach {
        & "CodeFluent.Runtime.Database.Client.exe" "runpivot" $PivotFilePath "$($baseConnectionString)$($_.Name)"
    }
}

Write-Host "-========-  Script ended  -========-"

The script above shows how easily some .NET Framework features can be used, in particular the Microsoft.SqlServer.Management.Smo namespace, which provides an intuitive way to manipulate SQL Server instances.
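As an aside, the same SMO objects are usable from C#. A rough, hypothetical equivalent of the filtering done in the script (the host name and the "PivotTest_" prefix are just the sample values used here):

using System;
using Microsoft.SqlServer.Management.Smo;

// Rough C# equivalent of the SMO filtering done in the PowerShell script above
// (requires a reference to the SMO assemblies; "MYSQLHOST" is a placeholder).
Server server = new Server("MYSQLHOST");

foreach (Database database in server.Databases)
{
    // Keep only online databases whose name starts with the expected prefix.
    if (database.Status == DatabaseStatus.Normal && database.Name.StartsWith("PivotTest_"))
    {
        Console.WriteLine(database.Name);
    }
}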

We could just call one of the CodeFluent Runtime Database programs described above, but the idea of using the PivotRunner directly through a custom PowerShell command is much more attractive!

Indeed, PowerShell gives us that opportunity. These custom commands are called "Cmdlets" and can be built in C#, as we will see in the second part of this article :)

Happy deploying!

The R&D team.

The new SQL Server Pivot Script producer

October 10, 2013

A new producer is available since the latest builds!

Enter the “SQL Server Pivot Script” producer.

The purpose of this producer is to let you deploy CodeFluent Entities-generated SQL Server databases on any machine (production servers, etc.) much more easily.

Before that, during development phases, the CodeFluent Entities SQL Server producer was already able to automatically upgrade live SQL Server databases using an integrated component called the Diff Engine. We all love this cool feature that allows us to develop and generate continuously without losing the data already existing in the target database (unlike most other environments…).

Now, this new producer provides the same kind of feature, but at deployment time.

It generates a bunch of files that can be embedded in your setup package or deployed somewhere on the target server. These files can then be provided as input to a tool named the PivotRunner. This tool will do everything needed to upgrade the database to the required state. It can create the database if it does not exist, add tables, columns, views, procedures, and keys where needed, etc. It will also add instances if possible.

Here is a diagram that recaps all this:

SQL Server Pivot Script Producer



To use it at development/modeling time:

  • Add the SQL Server Pivot Script producer to your project and set the Target Directory to a directory fully reserved for the outputs this tool will create. Don’t use an existing directory, create a new one for this.
  • Once you have built the project, this directory will contain at least one .XML file, but there may be more (if you have instances and blob instances for example). If you set ‘Build Package’ to ‘true’ in the producer’s configuration, the output will always be one unique file with a .parc (pivot archive) extension.
  • Copy these files where you want, or add them to your setup projects.

Now, at deployment time you have two choices:

1) Use the provided tool (don’t develop anything).

Use the CodeFluent.Runtime.Database.Client.exe (CLR2) or CodeFluent.Runtime.Database.Client4.exe (CLR 4) binaries. Just copy them to your target machine. You will also need CodeFluent.Runtime.dll and CodeFluent.Runtime.Database.dll. The tool is a simple command line tool that takes the pivot directory or package file as input.

2) Use the PivotRunner API.

The tool in 1) also uses this API. It’s a class provided in CodeFluent.Runtime.Database.dll (you will also need to reference the CodeFluent.Runtime.dll). The PivotRunner class is located in the CodeFluent.Runtime.Database.Management.SqlServer namespace.
This is very easy:

PivotRunner runner = new PivotRunner(pivotPath);
runner.ConnectionString = "This is my SQL Server connection string";
runner.Run();

If you need to log what happens, just give it an instance of a logger, a class that implements IServiceHost (in CodeFluent.Runtime), for example:

public class PivotRunnerLogger : IServiceHost
{
    public void Log(object value)
    {
        Console.WriteLine(value);
    }
}

What happens in the database during diff processing can also be logged, like this:

PivotRunner runner = new PivotRunner(pivotPath);
runner.Logger = new PivotRunnerLogger();
runner.ConnectionString = "This is my SQL Server connection string";
runner.Database.StatementRan += (sender, e) =>
    {
        Console.WriteLine(e.Statement.Command);
    };
runner.Run();

Note: this producer is still in its testing phase; the forums are here if you need help!

Happy diffin’

The R&D team.
