How to improve this data access layer select method pattern

0 votes
asked Jan 12, 2009 by joel-coehoorn

Lately I find myself writing data access layer select methods where the code all takes this general form:

public static DataTable GetSomeData( ... arguments)
{
    string sql = " ... sql string here:  often it's just a stored procedure name ... ";

    DataTable result = new DataTable();

    // GetOpenConnection() is a private method in the class: 
    // it manages the connection string and returns an open and ready connection
    using (SqlConnection cn = GetOpenConnection())
    using (SqlCommand cmd = new SqlCommand(sql, cn))
    {
        // could be any number of parameters, each with a different type
        cmd.Parameters.Add("@Param1", SqlDbType.VarChar, 50).Value = param1; //argument passed to function

        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            result.Load(rdr);
        }
    }

    return result;
}

Or like this:

public static DataRow GetSomeSingleRecord( ... arguments)
{
    string sql = " ... sql string here:  often it's just a stored procedure name ... ";

    DataTable dt = new DataTable();

    // GetOpenConnection() is a private method in the class: 
    // it manages the connection string and returns an open and ready connection
    using (SqlConnection cn = GetOpenConnection())
    using (SqlCommand cmd = new SqlCommand(sql, cn))
    {
        // could be any number of parameters, each with a different type
        cmd.Parameters.Add("@Param1", SqlDbType.VarChar, 50).Value = param1; //argument passed to function

        using (SqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SingleRow))
        {
            dt.Load(rdr);
        }
    }

    if (dt.Rows.Count > 0)
         return dt.Rows[0];
    return null;
}

These methods would be called by business layer code that then converts the base DataTable or DataRecord into strongly typed business objects that the presentation tier can use.

Since I'm using similar code repeatedly, I want to make sure this code is the best it can be. So how can it be improved? And is it worth trying to move the common code out into its own method? If so, what would that method look like (specifically with regard to passing a collection of SqlParameters in)?
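
Something along these lines is roughly what I'm picturing, though I'm not sure it's the right shape (sketch only):

private static DataTable GetData(string sql, params SqlParameter[] parameters)
{
    DataTable result = new DataTable();

    using (SqlConnection cn = GetOpenConnection())
    using (SqlCommand cmd = new SqlCommand(sql, cn))
    {
        if (parameters != null)
            cmd.Parameters.AddRange(parameters);

        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            result.Load(rdr);
        }
    }

    return result;
}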

7 Answers

0 votes
answered Jan 12, 2009 by joshberke

One pattern I've enjoyed looks like this as far as client code goes:

        DataTable data = null;
        using (StoredProcedure proc = new StoredProcedure("MyProcName","[Connection]"))
        {
            proc.AddParameter("@LoginName", loginName);
            data = proc.ExecuteDataTable();
        }

I usually make the connection optional, and I code it so the value is either pulled from the ConnectionStrings config section or treated as the actual connection string. This lets me reuse the DAL in one-off scenarios, and is partly a habit from the COM+ days when I stored the connection string using the object construction property.

I like this because it's easy to read and hides all the ADO code from me.
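
The wrapper class itself isn't shown here; a minimal sketch of the idea might look like this (assuming SqlClient, and treating the second constructor argument as either a ConnectionStrings entry name or a raw connection string):

// Sketch only -- a real version would add overloads, output parameters, error handling, etc.
public sealed class StoredProcedure : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly SqlCommand _command;

    public StoredProcedure(string procName, string connectionNameOrString)
    {
        // Try the ConnectionStrings section first (needs System.Configuration),
        // otherwise treat the argument as the connection string itself.
        var settings = ConfigurationManager.ConnectionStrings[connectionNameOrString];
        _connection = new SqlConnection(settings != null ? settings.ConnectionString : connectionNameOrString);
        _command = new SqlCommand(procName, _connection) { CommandType = CommandType.StoredProcedure };
    }

    public void AddParameter(string name, object value)
    {
        _command.Parameters.AddWithValue(name, value ?? DBNull.Value);
    }

    public DataTable ExecuteDataTable()
    {
        DataTable table = new DataTable();
        _connection.Open();
        using (SqlDataReader rdr = _command.ExecuteReader())
        {
            table.Load(rdr);
        }
        return table;
    }

    public void Dispose()
    {
        _command.Dispose();
        _connection.Dispose();
    }
}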

0 votes
answered Jan 12, 2009 by dfasdljkhfaskldjhfas

The only thing I do differently is that I switched from my own internal database helper methods to the Data Access Application Block: http://msdn.microsoft.com/en-us/library/cc309504.aspx

It makes the code a little more standardized and uniform, so other developers who already know the Enterprise Library can ramp up on it quickly.
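
Typical usage looks roughly like this (the default database comes from the Enterprise Library configuration; the procedure name and parameter here are made up):

// Requires a reference to Microsoft.Practices.EnterpriseLibrary.Data
int regionId = 42;                                // made-up parameter value
Database db = DatabaseFactory.CreateDatabase();   // default database from config
DataSet results = db.ExecuteDataSet("GetCustomersByRegion", regionId);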

0 votes
answered Jan 12, 2009 by davgarcia

There are many ways to implement a DAL; in my opinion you are on the right path. Some things to consider in your implementation:

  • You are using a factory-like method to create your SqlConnection. It's a minor point, but you can do the same for your SqlCommand (a sketch follows below).
  • The parameter length is optional, so you can actually leave it out of the Parameters.Add call.
  • Create methods for adding parameters too; code sample below.
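
For the first point, a command factory alongside GetOpenConnection could look something like this sketch (this CreateCommand helper is hypothetical and not part of the DbUtil class below):

internal static SqlCommand CreateCommand(SqlConnection connection, string commandText)
{
    SqlCommand command = connection.CreateCommand();
    command.CommandText = commandText;
    command.CommandType = CommandType.StoredProcedure; // or CommandType.Text, depending on the SQL
    return command;
}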

Add parameters using DbUtil.AddParameter(cmd, "@Id", SqlDbType.UniqueIdentifier, Id);

internal class DbUtil {

    internal static SqlParameter CreateSqlParameter(
        string parameterName,
        SqlDbType dbType,
        ParameterDirection direction,
        object value
    ) {
        SqlParameter parameter = new SqlParameter(parameterName, dbType);

        if (value == null) {
            value = DBNull.Value;
        }

        parameter.Value = value;

        parameter.Direction = direction;
        return parameter;
    }

    internal static SqlParameter AddParameter(
        SqlCommand sqlCommand,
        string parameterName,
        SqlDbType dbType
    ) {
        return AddParameter(sqlCommand, parameterName, dbType, null);
    }

    internal static SqlParameter AddParameter(
        SqlCommand sqlCommand,
        string parameterName,
        SqlDbType dbType,
        object value
    ) {
        return AddParameter(sqlCommand, parameterName, dbType, ParameterDirection.Input, value);
    }

    internal static SqlParameter AddParameter(
        SqlCommand sqlCommand,
        string parameterName,
        SqlDbType dbType,
        ParameterDirection direction,
        object value
    ) {
        SqlParameter parameter = CreateSqlParameter(parameterName, dbType, direction, value);
        sqlCommand.Parameters.Add(parameter);
        return parameter;
    }
}
0 votes
answered Jan 12, 2009 by joel-coehoorn

Had to add my own:
Return DataReader from DataLayer in Using statement

The new pattern lets me keep only one record in memory at a time, while still wrapping the connection in a nice 'using' statement:

public IEnumerable<T> GetSomeData<T>(string filter, Func<IDataRecord, T> factory)
{
    string sql = "SELECT * FROM [SomeTable] WHERE SomeColumn= @Filter";

    using (SqlConnection cn = new SqlConnection(GetConnectionString()))
    using (SqlCommand cmd = new SqlCommand(sql, cn))
    {
        cmd.Parameters.Add("@Filter", SqlDbType.NVarChar, 255).Value = filter;
        cn.Open();

        using (IDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                yield return factory(rdr);
            }
            rdr.Close();
        }
    }
}
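
A call site then just supplies a factory for each row, something like this (the Person type and the column names are made up):

// Minimal example type used below:
class Person
{
    public Person(int id, string name) { Id = id; Name = name; }
    public int Id { get; private set; }
    public string Name { get; private set; }
}

IEnumerable<Person> people = GetSomeData<Person>("smith",
    rdr => new Person((int)rdr["Id"], (string)rdr["Name"]));

foreach (Person p in people)
{
    // only one record is materialized at a time
}
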
0 votes
answered Jan 14, 2009 by dan-c

First, I assume you've already considered using an ORM vs. rolling your own, so I won't go into that.

My thoughts on rolling your own data access code:

  • Over time, I found it easier not to have separate DAL/BL objects, but rather to merge them into a single object (some time after reaching this conclusion I found out it's a pretty well-known pattern, namely Active Record). It might look nice and decoupled to have separate DAL assemblies, but the maintenance overhead adds up. Every time you add a new feature, you'll have to create more code and modify more classes. In my experience, the team that maintains an application is often much smaller than the original team that built it, and they'll hate the extra work required.
  • For large teams, it might make sense to separate the DAL (and let one group work on it while the others work on the layers above), but this is also a good incentive for code bloat.
  • Coming down to your specific sample: how do you use the resulting DataTable? Do you iterate the rows, create typed objects, and pull the data from each row? If so, think of the extra DataTable you created just to move data between the DAL and the BL. Why not take it directly from the DataReader?
  • Also about the sample: if you return an untyped DataTable, then I guess you have to use the column names (of the result set the SP call returns) way up in the calling code. This means that if something changes in the database, it might affect both layers.

My suggestion (I tried both methods; this is the latest working approach I came up with, and it sort of evolved over time):

  • Make a base class for your typed business objects.
  • Keep object state in the base class (new, modified, etc.).
  • Put the main data access methods in this class as static methods. With a little effort (hint: generic methods + Activator.CreateInstance) you can create one business object per row returned by the reader.
  • Make an abstract method in the business object that parses the row data (directly from the DataReader!) and fills the object.
  • Make static methods in the derived business objects that prepare the stored proc parameters (depending on the various filter criteria) and call the generic data access methods from the base class.

The aim is to end up with usage such as:

List<MyObject> objects = MyObject.FindMyObject(someParam);
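
A rough sketch of that shape, with made-up names (a real implementation would add state tracking, saving, and so on):

public abstract class BusinessObjectBase
{
    // Each derived type fills itself from the current row of the reader.
    protected abstract void Parse(IDataRecord record);

    // Generic helper: run a stored proc and build one business object per row.
    protected static List<T> Fetch<T>(string procName, params SqlParameter[] parameters)
        where T : BusinessObjectBase, new()
    {
        var results = new List<T>();

        using (var cn = new SqlConnection(GetConnectionString()))
        using (var cmd = new SqlCommand(procName, cn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddRange(parameters);
            cn.Open();

            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    var item = new T();   // or Activator.CreateInstance<T>()
                    item.Parse(rdr);
                    results.Add(item);
                }
            }
        }

        return results;
    }

    private static string GetConnectionString()
    {
        // however the project resolves its connection string (needs System.Configuration)
        return ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
    }
}

public class MyObject : BusinessObjectBase
{
    public int Id { get; private set; }
    public string Name { get; private set; }

    protected override void Parse(IDataRecord record)
    {
        Id = (int)record["Id"];
        Name = (string)record["Name"];
    }

    public static List<MyObject> FindMyObject(string someParam)
    {
        return Fetch<MyObject>("usp_FindMyObject",
            new SqlParameter("@SomeParam", someParam));
    }
}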

The benefit for me was that I only have to change one file in order to cope with changes in the database column names, types, etc. (small changes in general). With some well-thought-out regions, you can organize the code so that there are separate "layers" inside the same object :). The other benefit is that the base class is really reusable from one project to another, and the code bloat is minimal (well, compared with the benefits). You could also fill DataSets and bind them to UI controls :D

The limitations: you end up with one class per domain object (usually per main database table), and you can't load objects within existing transactions (although you could consider passing the transaction in, if you have one).

Let me know if you're interested in more details - I could expand the answer a bit.

0 votes
answered Jan 14, 2013 by nawfal

Similar to what I posted here

public IEnumerable<S> Get<S>(string query, Action<IDbCommand> parameterizer, 
                             Func<IDataRecord, S> selector)
{
    // T is the class-level generic parameter for the connection type
    // (e.g. SqlConnection), constrained as "where T : IDbConnection, new()"
    using (var conn = new T())
    {
        using (var cmd = conn.CreateCommand())
        {
            if (parameterizer != null)
                parameterizer(cmd);
            cmd.CommandText = query;
            cmd.Connection.ConnectionString = _connectionString;
            cmd.Connection.Open();
            using (var r = cmd.ExecuteReader())
                while (r.Read())
                    yield return selector(r);
        }
    }
}

I have these simple extension methods to make calling easier:

public static void Parameterize(this IDbCommand command, string name, object value)
{
    var parameter = command.CreateParameter();
    parameter.ParameterName = name;
    parameter.Value = value;
    command.Parameters.Add(parameter);
}

public static T To<T>(this IDataRecord dr, int index, T defaultValue = default(T),
                      Func<object, T> converter = null)
{
    return dr[index].To<T>(defaultValue, converter);
}

static T To<T>(this object obj, T defaultValue, Func<object, T> converter)
{
    if (obj.IsNull())
        return defaultValue;

    return converter == null ? (T)obj : converter(obj);
}

public static bool IsNull<T>(this T obj) where T : class
{
    return (object)obj == null || obj == DBNull.Value;
}

So now I can call:

var query = Get(sql, cmd =>
{
    cmd.Parameterize("saved", 1);
    cmd.Parameterize("name", "abel");
}, r => new User(r.To<int>(0), r.To<string>(1), r.To<DateTime?>(2), r.To<bool>(3)));
foreach (var user in query)
{

}

This is fully generic and fits any provider that complies with the ADO.NET interfaces. The connection object and reader are disposed only after the collection has been enumerated.

0 votes
answered Sep 15, 2017 by mohsen

The simplest solution:

var dt = new DataTable();
dt.Load(myDataReader);

// AsEnumerable() needs a reference to System.Data.DataSetExtensions,
// and ToList() needs "using System.Linq;"
List<DataRow> rows = dt.AsEnumerable().ToList();