Application Metrics

Overview

The ApplicationMetrics project was included with MethodInvocationRemoting in version 1.3.0.0. It provides classes and interfaces to obtain metric and instrumentation information from a client application, and log it to a range of storage formats and dashboard applications (file, relational database, console, and the Windows Performance Monitor). It is a generic framework, independent of MethodInvocationRemoting, and can easily be integrated with any other client application.

Background

In version 1.2.0.0 of MethodInvocationRemoting, I built the TcpRemoteSender/TcpRemoteReceiver and also RemoteSenderCompressor/RemoteReceiverDecompressor classes. After completing these I was interested in getting metric information out of them like the time taken to send messages over a network, the size of the messages sent, and in the case of the compression classes, the overall compression ratio.

I decided to look at writing a reusable framework to capture and record the metric information, along the lines of the ApplicationLogging framework. The main goals I tried to achieve were...

  1. To design clean, simple interfaces to record the metric information, so that the details of how the metrics were captured and recorded were separated from the client code (e.g. code dealing with message processing inside the sender and receiver classes) as much as possible.
  2. For the C# code, I wanted an option to log metrics using Windows performance counters, and have these visible in the Windows Performance Monitor application. The .NET framework already provides a set of classes to interact with performance counters in the System.Diagnostics namespace (PerformanceCounter, PerformanceCounterType, etc...). However, I found the following issues when trying to use these classes directly...
    • There are a lot of different members in the PerformanceCounterType enumeration, and understanding which counter type to use for which purpose is not always straightforward.
    • There are some anomalies in setting up and using the counters. For example, when registering certain types of counters, a base counter is expected to be registered immediately after the main counter. If this order of registration is not followed, the counters simply fail to log without throwing an exception.
    • There are some constraints around counters that record the frequency of certain events and how they are displayed in the Performance Monitor. For example, with the RateOfCountsPerSecond32 counter, if the rate at which the counter is logged is lower than the polling frequency set in Performance Monitor, no record of the counter appears in Performance Monitor.
    • Performance Monitor needs to be restarted after any counters are registered or re-registered. If it is not restarted, again no record of the counters appears in Performance Monitor.
  3. To also build interfaces which were generic enough to allow implementations other than the Windows performance counters. For example the interfaces should support implementations which write individual metric events to a relational database or other data store. Having metrics able to be stored in a database/datastore, and recorded using Windows performance counters, would allow both real-time viewing of metrics and instrumentation (through the Windows Performance Monitor), plus a complete detailed history of all metric and instrumentation events in a data store which could be post processed and analyzed (e.g. by SQL queries in the case of a relational database).

Summary of the Interfaces

IMetricLogger

The IMetricLogger interface defines the basic methods which a client application can use to log and record metric events. The table below gives an overview of each of the methods...

Method Description
Increment(CountMetric countMetric) This method should be used to record a single instance of a count metric event. A count metric represents an event where the number of occurrences of the event is important (e.g. the total number of occurrences, or the number of occurrences over a period of time). A common example of a count metric in the MethodInvocationRemoting framework is the 'MessageSent' event which represents a single message being sent as part of a remote method invocation.
Add(AmountMetric amountMetric) This method should be used to record a single instance of an amount metric event, and the numeric amount associated with the event. An amount metric represents an event where the total or average amount aggregated across multiple instances of the event is important. In MethodInvocationRemoting, a common use of amount metrics is to record the total size of a sent message. The average size of each message sent, or the total of all messages sent is useful instrumentation information.
Set(StatusMetric statusMetric) This method is used to record a single instance of a status metric event and the numeric value associated with the event. A status metric represents an event where the associated value is important to know for just the single instance of that event, but where totalling or averaging the value over multiple instances is not so useful. The status metric is not used in MethodInvocationRemoting, but a typical client use case might be to record the total memory used by an application. This value could be logged periodically to show a trend over time, however the total of multiple instances of the metric has little meaning.
Begin(IntervalMetric intervalMetric) This method is used to record the start of a single instance of an interval metric event. An interval metric represents an event where the time taken to complete the event is important. A common use of an interval metric in MethodInvocationRemoting is to record the time taken to send a message. The interval metric is recorded by calling the Begin() method before starting to send the message, and then calling the End() method once the send is complete.
End(IntervalMetric intervalMetric) This method is used to record the end/completion of a single instance of an interval metric event.
CancelBegin(IntervalMetric intervalMetric) This method is used to cancel the starting of an interval metric event. A typical use case would be where an unexpected exception occurs after having called Begin(). The exception handler routine would call CancelBegin() to ensure the class implementing IMetricLogger is kept in a consistent state.
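The shape of the interface implied by the table can be sketched as follows (a hypothetical Java sketch with stand-in metric classes and a toy in-memory implementation; the actual ApplicationMetrics declarations may differ in detail)...

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in metric base classes (illustrative only)
abstract class MetricBase { }
class CountMetric extends MetricBase { }
class AmountMetric extends MetricBase {
    final long amount;
    AmountMetric(long amount) { this.amount = amount; }
}
class StatusMetric extends MetricBase {
    final long value;
    StatusMetric(long value) { this.value = value; }
}
class IntervalMetric extends MetricBase { }

// The interface as described in the table above
interface IMetricLogger {
    void Increment(CountMetric countMetric);
    void Add(AmountMetric amountMetric);
    void Set(StatusMetric statusMetric);
    void Begin(IntervalMetric intervalMetric);
    void End(IntervalMetric intervalMetric);
    void CancelBegin(IntervalMetric intervalMetric);
}

// A toy implementation which tallies counts and amounts per metric class
class InMemoryMetricLogger implements IMetricLogger {
    final Map<Class<?>, Long> counts = new HashMap<>();
    final Map<Class<?>, Long> totals = new HashMap<>();
    public void Increment(CountMetric m) { counts.merge(m.getClass(), 1L, Long::sum); }
    public void Add(AmountMetric m) { totals.merge(m.getClass(), m.amount, Long::sum); }
    public void Set(StatusMetric m) { totals.put(m.getClass(), m.value); }
    public void Begin(IntervalMetric m) { /* would record a start timestamp */ }
    public void End(IntervalMetric m) { /* would record and store the elapsed time */ }
    public void CancelBegin(IntervalMetric m) { /* would discard the start timestamp */ }
}
```

A concrete metric such as the MessageReceived count metric would then simply be declared as a subclass of CountMetric, and logged via logger.Increment(new MessageReceived()).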

An example client code use case of the IMetricLogger interface is shown in the below extract from the Receive() method in the MethodInvocationRemoting FileRemoteReceiver class in C# (Note in this case a metricsUtilities object is used to proxy the method calls to an implementation of IMetricLogger to allow for enabling/disabling of metric logging via compilation options. The methods exposed by metricsUtilities are however the same as IMetricLogger)...

while (cancelRequest == false)
{
    if (fileSystem.CheckFileExists(messageFilePath) == true)
    {
        if (fileSystem.CheckFileExists(lockFilePath) == false)
        {
            metricsUtilities.Begin(new MessageReceiveTime());
            try
            {
                returnMessage = messageFile.ReadAll();
                fileSystem.DeleteFile(messageFilePath);
            }
            catch (Exception e)
            {
                metricsUtilities.CancelBegin(new MessageReceiveTime());
                throw e;
            }
            metricsUtilities.End(new MessageReceiveTime());
            metricsUtilities.Increment(new MessageReceived());
            metricsUtilities.Add(new ReceivedMessageSize(returnMessage.Length));
            loggingUtilities.LogMessageReceived(this, returnMessage);
            break;
        }
    }
    else
    {
        waitingForTimeout = true;
        if (readLoopTimeout > 0)
        {
            System.Threading.Thread.Sleep(readLoopTimeout);
        }
        waitingForTimeout = false;
    }
}

Design Choices

I spent some time deciding on the best way to represent and implement the actual metrics and metric events. The two favoured options were either to represent each metric as a class, or to identify each metric by a string (in which case the IMetricLogger interface would accept strings rather than classes deriving from MetricBase as its method parameters). The class option would mean that a specific class would need to be created for each individual metric, which would be a significant implementation cost for any client code using ApplicationMetrics. The string option would mean that classes for metrics would not need to be predefined (although metrics would still need to be set up at build time to associate the string representing the metric with properties like name and description). Ultimately the class option was chosen for the greater safety it provided. Using strings represented a risk, as a misspelling of a metric name could cause a failure in the framework that would only be detected at runtime. Using classes, on the other hand, meant that similar errors would be caught at compile time.

Most likely it would have been more correct from a class modelling perspective to use separate classes to represent metric definitions and metric instances. However this would have meant that a lot of classes would have needed to be built up front, and would have further increased the implementation cost. As it stands a single class deriving from the MetricBase base class represents both the definition and a single instance of a given metric. For example, an instance of the RemoteMethodSent class in MethodInvocationRemoting represents both the properties of the metric (like its name and description), and that an actual remote method was sent.
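As a hedged illustration of this single-class approach (illustrative Java, with made-up description text rather than the actual source)...

```java
// Illustrative sketch of the single-class approach described above: a class
// deriving from MetricBase carries the metric's definition (name and
// description), and constructing an instance represents a single occurrence
// of the event. The description text below is invented for the example.
abstract class MetricBase {
    protected String name;
    protected String description;
    public String getName() { return name; }
    public String getDescription() { return description; }
}

abstract class CountMetric extends MetricBase { }

class RemoteMethodSent extends CountMetric {
    public RemoteMethodSent() {
        name = "RemoteMethodSent";
        description = "A remote method was sent";
    }
}
```

Because each metric is identified by its class, a misspelled metric fails to compile, whereas a misspelled string key would only fail at runtime.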

IMetricAggregateLogger

The IMetricAggregateLogger interface provides methods which allow the definition of aggregates of metrics. Classes implementing this interface use these definitions to capture and record aggregates of underlying metrics instances. These instances would usually be logged through an implementation of the IMetricLogger interface.

The interface defines six overloads of a single method, DefineMetricAggregate(). The table below explains the metric aggregates which can be defined through this method...

Method Description
DefineMetricAggregate(CountMetric countMetric, TimeUnit timeUnit, string name, string description) Defines a metric aggregate which represents the number of occurrences of a count metric within the specified time unit. An example of this aggregate in the MethodInvocationRemoting project is the 'RemoteMethodsSentPerMinute' aggregate, which records the number of instances of the RemoteMethodSent metric occurring each minute.
DefineMetricAggregate(AmountMetric amountMetric, CountMetric countMetric, string name, string description) Defines a metric aggregate which represents the total amount of the specified amount metric per occurrence of the specified count metric. An example of this aggregate in the MethodInvocationRemoting project is the 'BytesSentPerRemoteMethodSent' aggregate, which records the total of the amount of all instances of metric SerializedMethodInvocationSize, divided by the number of occurrences of the RemoteMethodSent metric.
DefineMetricAggregate(AmountMetric amountMetric, TimeUnit timeUnit, string name, string description) Defines a metric aggregate which represents the total amount of the specified amount metric within the specified time unit. An example of this aggregate in the MethodInvocationRemoting project is the 'BytesReceivedPerSecond' aggregate, which records the total of the amount of all instances of metric ReceivedMessageSize, divided by the number of seconds that the logger has been operating (i.e. since the Start() method was called).
DefineMetricAggregate(AmountMetric numeratorAmountMetric, AmountMetric denominatorAmountMetric, string name, string description) Defines a metric aggregate which represents the ratio of one amount metric to another. An example of this aggregate in the MethodInvocationRemoting project is the 'CompressionRatio' aggregate, which records the total of the amount of all instances of metric CompressedStringSize divided by the total amount of all instances of metric SerializedReturnValueSize.
DefineMetricAggregate(IntervalMetric intervalMetric, CountMetric countMetric, string name, string description) Defines a metric aggregate which represents the total time of the specified interval metric per occurrence of the specified count metric. An example of this aggregate in the MethodInvocationRemoting project is the 'MessageSendTimePerRemoteMethodSent' aggregate, which records the total of the intervals of all instances of metric MessageSendTime divided by the total number of instances of metric RemoteMethodSent.
DefineMetricAggregate(IntervalMetric intervalMetric, string name, string description) Defines a metric aggregate which represents the total time of all occurrences of the specified interval metric as a fraction of the total runtime of the logger. An example of this aggregate in the MethodInvocationRemoting project is the 'TimeSpentCompressingReturnValues' aggregate, which records the total of the intervals of all instances of metric StringCompressTime divided by the time that the logger has been operating (i.e. since the Start() method was called).

See the MethodInvocationRemoting SampleApplication5 project for an example of how metric aggregates are defined and used.
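The six overloads can be sketched as follows (a hypothetical Java sketch with stand-in types; the toy recording implementation exists only to make the example concrete)...

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in types (illustrative only)
enum TimeUnit { Second, Minute, Hour, Day }
class CountMetric { }
class AmountMetric { }
class IntervalMetric { }

// The six DefineMetricAggregate() overloads from the table above
interface IMetricAggregateLogger {
    void DefineMetricAggregate(CountMetric countMetric, TimeUnit timeUnit, String name, String description);
    void DefineMetricAggregate(AmountMetric amountMetric, CountMetric countMetric, String name, String description);
    void DefineMetricAggregate(AmountMetric amountMetric, TimeUnit timeUnit, String name, String description);
    void DefineMetricAggregate(AmountMetric numeratorAmountMetric, AmountMetric denominatorAmountMetric, String name, String description);
    void DefineMetricAggregate(IntervalMetric intervalMetric, CountMetric countMetric, String name, String description);
    void DefineMetricAggregate(IntervalMetric intervalMetric, String name, String description);
}

// Toy implementation which just records the names of the defined aggregates
class AggregateRecorder implements IMetricAggregateLogger {
    final List<String> definedAggregates = new ArrayList<>();
    public void DefineMetricAggregate(CountMetric c, TimeUnit t, String name, String d) { definedAggregates.add(name); }
    public void DefineMetricAggregate(AmountMetric a, CountMetric c, String name, String d) { definedAggregates.add(name); }
    public void DefineMetricAggregate(AmountMetric a, TimeUnit t, String name, String d) { definedAggregates.add(name); }
    public void DefineMetricAggregate(AmountMetric n, AmountMetric de, String name, String d) { definedAggregates.add(name); }
    public void DefineMetricAggregate(IntervalMetric i, CountMetric c, String name, String d) { definedAggregates.add(name); }
    public void DefineMetricAggregate(IntervalMetric i, String name, String d) { definedAggregates.add(name); }
}
```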

Summary of the Classes

The class diagram below shows the main class hierarchy in ApplicationMetrics. Note that this diagram represents the C# class hierarchy where the leaf-level, useable classes are private and have an 'Implementation' suffix on the name (see below for an explanation of this). The same hierarchy applies for Java, without the 'Implementation' suffix in the name of the leaf-level classes (and without the PerformanceCounterMetricLogger class which is not implemented in Java).

ApplicationMetrics Class Diagram

Base classes / Abstract classes

MetricLoggerBuffer

This abstract class is the lowest level base class of all implementations of the IMetricLogger interface. It provides implementations of the methods defined in IMetricLogger, which store the logged metric events in queues. A number of abstract methods must be implemented in classes derived from MetricLoggerBuffer, which define how the queued metric events should be processed (e.g. ProcessCountMetricEvent()). For example, in the FileMetricLogger class, the ProcessCountMetricEvent() method writes details of queued count metrics to a file. Implementations of IMetricLogger which do not require totaling or aggregating can derive directly from this class (FileMetricLogger and MicrosoftAccessMetricLogger are examples of this).

As of ApplicationMetrics version 1.4.0.0, the logic to decide when to process the contents of the queues has been decoupled from the MetricLoggerBuffer class, and moved into classes implementing interface IBufferProcessingStrategy. This design follows a strategy pattern, whereby an instance of IBufferProcessingStrategy is injected into the constructor of the MetricLoggerBuffer class. Two classes implementing IBufferProcessingStrategy are provided: LoopingWorkerThreadBufferProcessor and SizeLimitedBufferProcessor (see details below). Interaction between MetricLoggerBuffer and IBufferProcessingStrategy implementations is via method calls and raising events. IBufferProcessingStrategy implementations are notified of the logging of metric events via methods like NotifyCountMetricEventBuffered(). IBufferProcessingStrategy implementations raise event 'BufferProcessed' to instruct the MetricLoggerBuffer class to process its queues (in Java this is done via calling method IBufferProcessedEventHandler.BufferProcessed()).

The provided implementations of IBufferProcessingStrategy perform the dequeuing and processing of the metric events on a worker thread. As such, the methods defined on IMetricLogger to capture metric events, like Increment(), require minimal CPU cycles (just enough to acquire a lock and put the metric event on a queue), meaning that metric logging causes minimal delay and interruption to the client code.
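The buffering pattern described above can be sketched as follows (an illustrative Java sketch, not the actual ApplicationMetrics source)...

```java
import java.util.ArrayDeque;
import java.util.Queue;

class CountMetric { }

// A queued metric event: the metric itself plus the time it occurred
class CountMetricEventInstance {
    final CountMetric metric;
    final long eventTimeMillis;
    CountMetricEventInstance(CountMetric metric, long eventTimeMillis) {
        this.metric = metric;
        this.eventTimeMillis = eventTimeMillis;
    }
}

class MetricLoggerBufferSketch {
    private final Queue<CountMetricEventInstance> countMetricEventQueue = new ArrayDeque<>();

    // The logging call just timestamps the event and enqueues it under a lock,
    // so the calling client code is delayed minimally.
    public void Increment(CountMetric countMetric) {
        synchronized (countMetricEventQueue) {
            countMetricEventQueue.add(new CountMetricEventInstance(countMetric, System.currentTimeMillis()));
        }
        // A real implementation would also notify the buffer processing strategy,
        // e.g. bufferProcessingStrategy.NotifyCountMetricEventBuffered()
    }

    // Called on the worker thread when the strategy raises 'BufferProcessed'
    public int processQueuedEvents() {
        int processed = 0;
        synchronized (countMetricEventQueue) {
            while (countMetricEventQueue.poll() != null) {
                processed++;  // a subclass would call e.g. ProcessCountMetricEvent() here
            }
        }
        return processed;
    }
}
```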

The MetricLoggerBuffer class maintains separate queues for each type of metric event (i.e. different queues for count metric, status metrics, etc...), and the logging and processing of the queues occur on separate threads. Because of this it's possible that the ordering of metrics logged does not exactly match the actual order they occurred in. However, timestamps of all metric events are captured, so it's recommended to sort logged metrics by their timestamp to see the correct order.

For interval metrics, the timestamp logged against the metric is the timestamp when the Begin() method was called, rather than the End() method.

MetricLoggerStorer

This abstract class derives from MetricLoggerBuffer, and provides additional functionality to store the counts and totals of all logged metric event instances. Abstract methods that must be implemented in derived classes define how the totals of the metric instances should be logged (e.g. method LogCountMetricTotal()).

MetricAggregateLogger

This abstract class derives from MetricLoggerStorer, and additionally implements interface IMetricAggregateLogger. It supports the defining of aggregates of metrics, and contains internal container objects to store the aggregate definitions. Deriving classes must implement abstract methods like LogCountOverTimeUnitAggregate(), which define how the individual aggregates of the metrics should be logged.

'Implementation' classes in C#

For clients using a compiled dll of ApplicationMetrics, I of course wanted the useable, non-abstract classes like FileMetricLogger and PerformanceCounterMetricLogger to be public in the project. However I wanted to make the abstract classes they were deriving from (like MetricLoggerBuffer and MetricAggregateLogger) private, to keep the compiled interface clean. As the project is open source, if someone wanted to derive their own class from MetricLoggerBuffer or MetricAggregateLogger they could just download the source, create their classes within the project and compile themselves. Java allows a derived class to be public while its base class is private, but C# does not. To work around this in C#, all useable classes have an 'Implementation' suffix in the class name. The implementation class is private in the project and derives from the relevant private base classes. The public class has no 'Implementation' suffix and uses a composition pattern to proxy all method calls to the 'Implementation' class inside it. The ConsoleMetricLogger and ConsoleMetricLoggerImplementation classes demonstrate this design.

WorkerThreadBufferProcessorBase

This abstract class provides common functionality for classes implementing interface IBufferProcessingStrategy, which use a worker thread to process the metric events stored in the queues in class MetricLoggerBuffer.

Public Classes

MicrosoftAccessMetricLogger

This class provides a simple example of how the IMetricLogger interface could be implemented for a relational database. The class derives from MetricLoggerBuffer and simply writes metric events directly to tables in an Access database. SQL can then be used to order, aggregate, and total the metrics. A sample Access database with the correct schema setup is included in the \Resources folder in the MethodInvocationRemoting source code. The lookup tables (e.g. CountMetrics, AmountMetrics) are preconfigured with the relevant metrics generated by the MethodInvocationRemoting classes. If using ApplicationMetrics with other client code, the lookup tables should be populated with the applicable metric entries for that client. Also ensure that the Categories table is populated with the same category entry that is passed to the constructor of the MicrosoftAccessMetricLogger class. If the correct categories and metrics are not entered into these tables, the class will fail to store the metrics (and will not throw any exception).

The classes in C# and Java have only been tested in a 32-bit Windows environment. It's possible that they may need some adjustment to work correctly in a 64-bit Windows environment (e.g. the connection string may need to be updated). Additionally, since ApplicationMetrics version 1.4.0.0, the Java version of MicrosoftAccessMetricLogger no longer functions, due to the removal of the JDBC-ODBC bridge in Java 1.8. Another issue is that the granularity of the timestamp in Microsoft Access is only to the closest second. As metric events can be logged out of order by the MetricLoggerBuffer base class, this lack of granularity means that the correct order of metric events can be lost. Finally, testing of the class showed quite high memory accumulation in both C# and Java (levelling out at around 20MB of memory use in a simple test harness). Again, the class is meant to serve as an example only, and it's expected that for production cases the IMetricLogger interface would be implemented for a robust, large-scale relational database platform like Oracle, SQL Server, or MySQL.

FileMetricLogger

This class provides a simple implementation of the IMetricLogger interface for testing. Metric events are written directly to the file specified in the class constructor. As described above, due to logging and processing of metric events occurring on separate threads, the metric events may be written to the file out of order. The file should be ordered by time stamp to restore the correct order.
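The recommendation to restore the order of logged events by timestamp can be sketched as follows (the record format here is invented for the example)...

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// An invented, minimal representation of one logged metric event
class LoggedMetricRecord {
    final long timestampMillis;
    final String metricName;
    LoggedMetricRecord(long timestampMillis, String metricName) {
        this.timestampMillis = timestampMillis;
        this.metricName = metricName;
    }
}

class MetricRecordSorter {
    // Because each metric type is queued and processed independently, records
    // can interleave out of order; sorting by the captured timestamp restores
    // the true order of events.
    static List<LoggedMetricRecord> sortByTimestamp(List<LoggedMetricRecord> records) {
        List<LoggedMetricRecord> sorted = new ArrayList<>(records);
        sorted.sort(Comparator.comparingLong(r -> r.timestampMillis));
        return sorted;
    }
}
```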

ConsoleMetricLogger

The ConsoleMetricLogger class provides a mechanism to log metrics to the console. Unlike the File and Microsoft Access versions, it derives from the MetricLoggerStorer and MetricAggregateLogger classes, and so totals the instances and amounts of metrics, and also allows for the definition and logging of aggregates of the base metrics. Rather than logging individual metrics directly to the console, it logs the totals, and any defined metrics once each time the worker thread iterates. While the File and Microsoft Access metric loggers provide a detailed, granular, historic log of metrics, the ConsoleMetricLogger provides more of a real-time snapshot of the totals and aggregates of metrics, and hence of the performance and behaviour of the client code or application. Metric aggregates can be defined/registered using the DefineMetricAggregate() method from the MetricAggregateLogger base class.

PerformanceCounterMetricLogger

This class logs metrics using Windows performance counters. Like the ConsoleMetricLogger class it derives from MetricAggregateLogger, and hence can log totals of metrics and aggregates of metrics to Windows performance counters which can then be viewed though the Windows Performance Monitor application. It allows real-time viewing of the activity and performance of a client application.

Any metrics or metric aggregates to be logged must be explicitly registered via the RegisterMetric() or DefineMetricAggregate() methods. Additionally counters corresponding to the metric and metric aggregates need to be created in the operating system. The class offers this functionality through the CreatePerformanceCounters() method, which should be called after all metrics and aggregates have been registered. Note however, that creating the counters in the operating system only needs to be performed once, provided the metrics and aggregates don't change between instances of using the PerformanceCounterMetricLogger. Note also that the class needs to run with appropriate administrative privileges to be able to create performance counters in the operating system.

For several of the metric aggregate types (e.g. those which aggregate over a time unit), the PerformanceCounterMetricLogger class will log two performance counters for a single aggregate. The name of the first counter will be the same as that defined for the metric aggregate (i.e. specified in the call to the DefineMetricAggregate() method), but an additional counter will also be logged with the suffix 'Instantaneous'. The standard counter uses the metric totalling functionality in the MetricLoggerStorer class. During the runtime of the class, the logging of aggregates becomes progressively less sensitive to sudden changes in the amount or frequency of metrics (the stored totals become larger and larger compared to the value of individual metric instances). This means that a sudden change in an important metric (e.g. the frequency or size of messages received) may not be noticed for some time using the standard counter. The 'Instantaneous' counter however uses a different underlying Windows performance counter type, and reacts immediately to sudden changes in the amount or frequency of metrics. This is because the 'Instantaneous' counter aggregates over a short time window (defined by the polling interval of the application reading and displaying the counters, such as the Windows Performance Monitor), whereas the MetricLoggerStorer class aggregates over the entire runtime of the class. One important caveat when using the Windows Performance Monitor however, is that the polling interval needs to be greater than or equal to the time unit of the metric aggregate for the associated counter to be displayed. For example, if an aggregate is defined with a denominator of TimeUnit.Minute, the polling interval in Windows Performance Monitor must be set to a minimum of 60 seconds, or the counter will not show in Performance Monitor.

Examples of the setup and use cases of the PerformanceCounterMetricLogger class can be found in the MethodInvocationRemoting SampleApplication5 project.

LoopingWorkerThreadBufferProcessor

This class implements the IBufferProcessingStrategy interface, and processes the contents of the metric event queues at a set interval, specified by constructor parameter 'dequeueOperationLoopInterval'. This class is suitable for use with metric logger classes that update a user interface or dashboard (e.g. ConsoleMetricLogger and PerformanceCounterMetricLogger), as the user interface will be updated at a regular interval.
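A minimal sketch of this looping strategy (illustrative Java, not the actual source; a CountDownLatch is included so a caller can wait deterministically for the first iteration)...

```java
import java.util.concurrent.CountDownLatch;

// Illustrative sketch of the looping strategy described above: a worker thread
// raises the buffer-processed notification, then sleeps for the configured
// interval, until stopped.
class LoopingBufferProcessorSketch {
    private volatile boolean cancelRequest = false;
    private final int dequeueOperationLoopInterval;
    private final Runnable bufferProcessedEventHandler;
    private final CountDownLatch firstIterationSignal;
    private Thread workerThread;

    LoopingBufferProcessorSketch(int dequeueOperationLoopInterval, Runnable bufferProcessedEventHandler, CountDownLatch firstIterationSignal) {
        this.dequeueOperationLoopInterval = dequeueOperationLoopInterval;
        this.bufferProcessedEventHandler = bufferProcessedEventHandler;
        this.firstIterationSignal = firstIterationSignal;
    }

    public void Start() {
        workerThread = new Thread(() -> {
            while (cancelRequest == false) {
                bufferProcessedEventHandler.run();  // equivalent to raising 'BufferProcessed'
                firstIterationSignal.countDown();
                try { Thread.sleep(dequeueOperationLoopInterval); }
                catch (InterruptedException e) { return; }
            }
        });
        workerThread.start();
    }

    public void Stop() {
        cancelRequest = true;
        workerThread.interrupt();
        try { workerThread.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```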

SizeLimitedBufferProcessor

This class implements the IBufferProcessingStrategy interface, and processes the contents of the metric event queues when the total number of queued metric events reaches a given limit (specified by constructor parameter 'bufferSizeLimit'). As the frequency of the buffer processing scales linearly with the volume of metrics being logged, CPU cycles are not wasted processing the queues when they are empty or contain few metric events (as can be the case with the LoopingWorkerThreadBufferProcessor class). The class is suitable for use with metric loggers which can take advantage of this efficiency gain, and do not require that the queues be processed at a fixed interval... i.e. classes which log metrics more for post-processing and aggregation rather than immediate visualisation (FileMetricLogger and MicrosoftAccessMetricLogger are examples of such classes).
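The size-limited behaviour can be sketched as follows (illustrative Java; the real SizeLimitedBufferProcessor signals a worker thread rather than invoking the handler inline)...

```java
// Hedged sketch of the size-limited strategy described above: each buffered
// event notification increments a counter, and once the count reaches the
// limit the buffer-processed event is raised and the counter is reset.
class SizeLimitedBufferProcessorSketch {
    private final int bufferSizeLimit;
    private int totalBufferedEvents = 0;
    private final Runnable bufferProcessedEventHandler;

    SizeLimitedBufferProcessorSketch(int bufferSizeLimit, Runnable bufferProcessedEventHandler) {
        this.bufferSizeLimit = bufferSizeLimit;
        this.bufferProcessedEventHandler = bufferProcessedEventHandler;
    }

    // Corresponds to methods like NotifyCountMetricEventBuffered() on
    // IBufferProcessingStrategy
    public void NotifyCountMetricEventBuffered() {
        totalBufferedEvents++;
        if (totalBufferedEvents >= bufferSizeLimit) {
            bufferProcessedEventHandler.run();  // equivalent to raising 'BufferProcessed'
            totalBufferedEvents = 0;
        }
    }
}
```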

Limitations

The timestamps of when metrics occur, as well as the durations of interval metrics, are recorded using the DateTime.UtcNow property in C#, and the GregorianCalendar class in Java. Both of these have limited accuracy at the millisecond level. Fast machines can achieve millisecond accuracy, but on some machines DateTime.UtcNow may only be accurate to around 30ms. If greater precision is required, the interfaces could be implemented using more accurate underlying classes like the System.Diagnostics.Stopwatch class in C#.
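The granularity limitation can be observed with a small probe like the following (Java; System.nanoTime() plays the same role as System.Diagnostics.Stopwatch in C# when higher precision is needed)...

```java
// A small probe illustrating the timer-granularity point above: repeatedly
// sampling the millisecond clock and recording the smallest observed step
// gives the effective tick resolution on the current machine. On some
// machines this step can be considerably larger than 1ms.
class ClockGranularityProbe {
    // Returns the smallest non-zero step (in ms) seen in the millisecond clock
    // over a ~200ms sampling window
    static long smallestMillisStep() {
        long last = System.currentTimeMillis();
        long smallest = Long.MAX_VALUE;
        long deadline = last + 200;
        while (System.currentTimeMillis() < deadline) {
            long now = System.currentTimeMillis();
            if (now > last) {
                smallest = Math.min(smallest, now - last);
                last = now;
            }
        }
        return smallest;
    }
}
```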

Unit Tests

ApplicationMetrics includes an extensive set of unit tests in both C# and Java. The majority of these unit tests were written prior to ApplicationMetrics version 1.4.0.0, when the worker thread performing the queue processing was embedded in the MetricLoggerBuffer class (since version 1.4.0.0 the same worker thread functionality has been moved to the LoopingWorkerThreadBufferProcessor class)*. Writing unit tests around code with an embedded worker thread presented several challenges...

  • As the worker thread repeatedly iterates through a loop to process and handle any logged metric events, how to write deterministic unit tests where the exact number of calls to mocks could be tested (i.e. how to guarantee how many times the loop would iterate)?
  • How to catch and test for exceptions which occur on worker threads? The NUnit framework used for C# unit tests does not catch exceptions occurring on worker threads. Similarly in Java, exceptions which occur on worker threads are not automatically thrown on the parent thread. Instead they require that an UncaughtExceptionHandler be set on the worker thread to handle any exceptions that occur.

To work around the problem of writing deterministic tests, I put mechanisms into both the C# and Java code to allow the worker thread to only iterate once under test conditions. In Java, the overloaded test constructor on the LoopingWorkerThreadBufferProcessor class requires a CountDownLatch object. In tests, after the first iteration of the loop the CountDownLatch is used to signal any waiting threads, and the loop is exited. This behaviour is defined in the private class LoopingWorkerThread inside the LoopingWorkerThreadBufferProcessor class...

@Override
public void run() {
    try {
        while (cancelRequest == false) {
            bufferProcessedEventHandler.BufferProcessed();
            if (dequeueOperationLoopInterval > 0) {
                Thread.sleep(dequeueOperationLoopInterval);
            }
            // If the code is being tested, allow only a single iteration of the loop
            if (testConstructor == true) {
                dequeueOperationLoopCompleteSignal.countDown();
                break;
            }
        }
    }
    catch (Exception e) {
        throw new RuntimeException("Exception in buffer processing thread.", e);
    }
    if (outerClass.getTotalMetricEventsBufferred() > 0 && processRemainingBufferredMetricsOnStop == true) {
        try {
            bufferProcessedEventHandler.BufferProcessed();
        }
        catch (Exception e) {
            throw new RuntimeException("Exception processing buffers when exiting buffer processing thread.", e);
        }
    }
}

In the unit test code, the await() method is called on the CountDownLatch object between calling the methods on the test class, and verifying the results (as demonstrated below in method IncrementSuccessTest() in the tests for class MicrosoftAccessMetricLogger)...

testMicrosoftAccessMetricLogger.Increment(new TestMessageReceivedMetric());
testMicrosoftAccessMetricLogger.Increment(new TestDiskReadOperationMetric());
testMicrosoftAccessMetricLogger.Increment(new TestMessageReceivedMetric());
testMicrosoftAccessMetricLogger.Start();
dequeueOperationLoopCompleteSignal.await();

verify(mockCalendarProvider, times(3)).getCalendar(TimeZone.getTimeZone(timeZoneId));
verify(mockStatement).execute(firstExpectedSqlStatement.toString());
verify(mockStatement).execute(secondExpectedSqlStatement.toString());
verify(mockStatement).execute(thirdExpectedSqlStatement.toString());
verifyNoMoreInteractions(mockCalendarProvider, mockConnection, mockStatement);

This ensures that the results are not verified before the worker thread has iterated, and also that the worker thread will not iterate multiple times before the results are verified.

In C# the 'Signal' feature of NMock2 is utilised to synchronise the worker thread under test, and the thread running the unit tests. On the last Expect defined in the test, the NMock2 Signal action is specified to Set() an AutoResetEvent. The thread running the test code waits on the same AutoResetEvent between calling the methods on the test class, and verifying the results.

using (mocks.Ordered)
{
    SetConnectExpectations();
    Expect.Once.On(mockDateTime).GetProperty("UtcNow").Will(Return.Value(new System.DateTime(2014, 6, 14, 12, 45, 31)));
    Expect.Once.On(mockDateTime).GetProperty("UtcNow").Will(Return.Value(new System.DateTime(2014, 6, 14, 12, 45, 43)));
    Expect.Once.On(mockDateTime).GetProperty("UtcNow").Will(Return.Value(new System.DateTime(2014, 6, 15, 23, 58, 47)));
    Expect.Once.On(mockDbCommand).SetProperty("CommandText").To(firstExpectedSqlStatement.ToString());
    Expect.Once.On(mockDbCommand).Method("ExecuteNonQuery").WithNoArguments().Will(Return.Value(1));
    Expect.Once.On(mockDbCommand).SetProperty("CommandText").To(secondExpectedSqlStatement.ToString());
    Expect.Once.On(mockDbCommand).Method("ExecuteNonQuery").WithNoArguments().Will(Return.Value(1));
    Expect.Once.On(mockDbCommand).SetProperty("CommandText").To(thirdExpectedSqlStatement.ToString());
    Expect.Once.On(mockDbCommand).Method("ExecuteNonQuery").WithNoArguments().Will(Return.Value(1), Signal.EventWaitHandle(workerThreadLoopCompleteSignal));
}

testMicrosoftAccessMetricLogger.Connect();
testMicrosoftAccessMetricLogger.Increment(new TestMessageReceivedMetric());
testMicrosoftAccessMetricLogger.Increment(new TestDiskReadOperationMetric());
testMicrosoftAccessMetricLogger.Increment(new TestMessageReceivedMetric());
testMicrosoftAccessMetricLogger.Start();
workerThreadLoopCompleteSignal.WaitOne();

mocks.VerifyAllExpectationsHaveBeenMet();
Assert.IsNull(exceptionStorer.StoredException);

Similarly to the Java version, when the MetricLoggerBuffer class is constructed using the test constructor, the worker thread loop is stopped after a single iteration...

while (cancelRequest == false)
{
    OnBufferProcessed(EventArgs.Empty);
    if (dequeueOperationLoopInterval > 0)
    {
        Thread.Sleep(dequeueOperationLoopInterval);
    }
    // If the code is being tested, allow only a single iteration of the loop
    if (unitTestMode == true)
    {
        break;
    }
}

* Note - as the worker thread functionality has now been moved into implementations of the IBufferProcessingStrategy interface, the unit tests could be refactored and simplified to mock IBufferProcessingStrategy, instead of using a real LoopingWorkerThreadBufferProcessor class.
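In Java the same synchronisation between the test thread and the worker thread can be achieved with the standard java.util.concurrent.CountDownLatch: the worker counts the latch down when its iteration completes, and the test thread awaits the latch before verifying results. The sketch below shows the pattern only; the Worker class is an illustrative stand-in, not the actual project code...

```java
import java.util.concurrent.CountDownLatch;

public class WorkerThreadSyncExample {

    // Hypothetical stand-in for a buffer-processing worker thread
    static class Worker implements Runnable {
        private final CountDownLatch loopCompleteSignal;
        volatile int iterations = 0;

        Worker(CountDownLatch loopCompleteSignal) {
            this.loopCompleteSignal = loopCompleteSignal;
        }

        @Override
        public void run() {
            // Single iteration of the worker loop (as when running in
            // 'unit test mode')
            iterations++;
            // Signal the test thread that the iteration has completed
            loopCompleteSignal.countDown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch loopCompleteSignal = new CountDownLatch(1);
        Worker worker = new Worker(loopCompleteSignal);
        new Thread(worker).start();
        // The test thread blocks here until the worker has iterated,
        // so results are never verified too early
        loopCompleteSignal.await();
        System.out.println("iterations=" + worker.iterations);
    }
}
```

This serves the same purpose as the AutoResetEvent set by the NMock2 Signal action in the C# tests.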

Anomalies

As with the ApplicationLogging and MethodInvocationRemoting projects, ApplicationMetrics uses the OperatingSystemAbstraction project to create mockable interfaces which wrap classes in the .NET framework and Java. For the MicrosoftAccessMetricLogger class in C#, classes were added to OperatingSystemAbstraction to wrap the OleDbConnection and OleDbCommand classes (in the System.Data.OleDb namespace). One difficulty with wrapping these classes was that at setup you are required to create a link between the connection and the command. This means that either in the wrapped interface or implementation class you need to expose the actual underlying System.Data.OleDb.OleDbConnection to set it on the OleDbCommand. Rather than expose this on the IOleDbConnection interface, I created a custom property on the OperatingSystemAbstraction.OleDbConnection class called Connection which returns the underlying System.Data.OleDb.OleDbConnection object. Inside the MicrosoftAccessMetricLoggerImplementation class, the connection is defined as an IOleDbConnection interface, on which the Connection property is not available. Hence when constructing the IOleDbCommand object, the IOleDbConnection must be downcast to an OleDbConnection...

dbConnection = new OleDbConnection();
// Need to cast the connection to an OleDbConnection in order to get the underlying connection
// Usually would avoid casting, but in this case it should be safe, as the member was just set to this object type on the previous line
dbCommand = new OleDbCommand("", ((OleDbConnection)dbConnection).Connection);

In Java the Connection and Statement are already interfaces, and hence the Java version of the MicrosoftAccessMetricLogger can be unit tested without the need for wrapping classes and interfaces.
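Because the java.sql types are interfaces, a test double can be supplied directly, whether via a mocking library or even the JDK's built-in dynamic proxies. The sketch below is illustrative only (it is not the project's actual test code); it builds a Statement stub that simply records the SQL strings passed to execute(), which is the kind of verification the MicrosoftAccessMetricLogger tests perform...

```java
import java.lang.reflect.Proxy;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class JdbcInterfaceStubExample {

    // Creates a Statement stub that records any SQL passed to execute().
    // No wrapper interface is needed, because Statement is already an interface.
    static Statement recordingStatement(List<String> executedSql) {
        return (Statement) Proxy.newProxyInstance(
            Statement.class.getClassLoader(),
            new Class<?>[] { Statement.class },
            (proxy, method, args) -> {
                if (method.getName().equals("execute") && args != null) {
                    executedSql.add((String) args[0]);
                    return true;
                }
                // Other Statement methods are not exercised by this sketch
                return null;
            });
    }

    public static void main(String[] args) throws Exception {
        List<String> executedSql = new ArrayList<>();
        Statement stubStatement = recordingStatement(executedSql);
        // The class under test would only ever see the Statement interface
        stubStatement.execute("SELECT 1");
        System.out.println(executedSql.get(0));
    }
}
```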

The PerformanceCounterMetricLogger class in C# needs to create multiple PerformanceCounter objects (from the System.Diagnostics namespace) to allow interaction with the operating system. If new instances of these classes are created directly within the PerformanceCounterMetricLoggerImplementation class, the PerformanceCounter classes can't be mocked and tested as part of the unit tests. To work around this I created an interface IPerformanceCounterFactory and a class PerformanceCounterFactory in the OperatingSystemAbstraction project. Instead of creating new instances of PerformanceCounter directly, they are obtained from a call to the Create() method on the PerformanceCounterFactory class. Additionally, the Create() call returns a class implementing the IPerformanceCounter interface, rather than a concrete PerformanceCounter class. In the real use case, the call to Create() ends up returning a real System.Diagnostics.PerformanceCounter (wrapped in an OperatingSystemAbstraction.PerformanceCounter), but in the test case the call to Create() can be mocked, and a mocked IPerformanceCounter returned. This is demonstrated in the mocking of the call to the Create() method in the below Expect statements from test LogCountMetricTotalSuccessTest()...

using (mocks.Ordered)
{
    Expect.Once.On(mockDateTime).GetProperty("UtcNow").Will(Return.Value(new System.DateTime(2014, 6, 26, 22, 49, 01)));
    Expect.Once.On(mockPerformanceCounterFactory).Method("Create").With(testMetricCategoryName, (new TestMessageReceivedMetric()).Name, false).Will(Return.Value(mockPerformanceCounter));
    Expect.Once.On(mockDateTime).GetProperty("UtcNow").Will(Return.Value(new System.DateTime(2014, 6, 26, 22, 49, 03)));
    Expect.Once.On(mockPerformanceCounter).SetProperty("RawValue").To(1L).Will(Signal.EventWaitHandle(workerThreadLoopCompleteSignal));
}
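The general shape of this 'factory returns an interface' pattern can be sketched in a few lines. The type names below (ICounter, ICounterFactory, InMemoryCounter) are hypothetical illustrations, not the actual project types: the class under test depends only on the two interfaces, so a test can substitute a factory that hands back in-memory or mocked counters...

```java
// Hypothetical counter abstraction: the class under test sees only interfaces
interface ICounter {
    void setRawValue(long value);
    long getRawValue();
}

interface ICounterFactory {
    // The class under test calls create() instead of 'new', so tests can
    // substitute a factory that returns test doubles
    ICounter create(String categoryName, String counterName);
}

// Test double: records values in memory instead of talking to the OS
class InMemoryCounter implements ICounter {
    private long rawValue;
    public void setRawValue(long value) { rawValue = value; }
    public long getRawValue() { return rawValue; }
}

class InMemoryCounterFactory implements ICounterFactory {
    public ICounter create(String categoryName, String counterName) {
        return new InMemoryCounter();
    }
}

public class CounterFactoryExample {
    public static void main(String[] args) {
        ICounterFactory factory = new InMemoryCounterFactory();
        ICounter counter = factory.create("TestCategory", "MessageReceived");
        counter.setRawValue(1L);
        System.out.println(counter.getRawValue());
    }
}
```

In production, a concrete factory would return counters backed by the operating system; the consuming code is unchanged either way.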

Use in MethodInvocationRemoting

MethodInvocationRemoting makes extensive use of ApplicationMetrics to log detailed metric and instrumentation information from all main classes. The table below lists these classes and the metrics that they log...

Class | Metric | Metric Type | What it Measures
ActiveMqRemoteReceiver | MessageReceived | Count | A message was received via ActiveMQ
 | ReceivedMessageSize | Amount | The size of a message received via ActiveMQ
(There is no metric for ActiveMQ receive time, as the call to the underlying IMessageConsumer.Receive() method includes waiting time; recording the receive time based on this method would not be accurate.)
ActiveMqRemoteSender | MessageSendTime | Interval | The time taken to send a message via ActiveMQ
 | MessageSent | Count | A message was sent via ActiveMQ
FileRemoteReceiver | MessageReceiveTime | Interval | The time taken to receive a message via the file system
 | MessageReceived | Count | A message was received via the file system
 | ReceivedMessageSize | Amount | The size of a message received via the file system
FileRemoteSender | MessageSendTime | Interval | The time taken to send a message via the file system
 | MessageSent | Count | A message was sent via the file system
MethodInvocationRemoteReceiver | RemoteMethodReceiveTime | Interval | The time taken to receive and invoke a remote method, including deserialization, invoking the method locally, and sending any return value (or completion notification in the case of a void method). Note this metric does not include the time taken to receive the message by the IRemoteReceiver class.
 | RemoteMethodReceived | Count | A remote method was received and invoked
MethodInvocationRemoteSender | RemoteMethodSendTime | Interval | The time taken to send and invoke a remote method, including serialization, invoking the method remotely, and receiving any return value (or completion notification in the case of a void method).
 | RemoteMethodSent | Count | A remote method was sent and invoked
MethodInvocationSerializer | MethodInvocationSerializeTime | Interval | The time taken to serialize a method invocation
 | MethodInvocationSerialized | Count | A method invocation was serialized
 | SerializedMethodInvocationSize | Amount | The size* of a serialized method invocation
 | MethodInvocationDeserializeTime | Interval | The time taken to deserialize a method invocation
 | MethodInvocationDeserialized | Count | A method invocation was deserialized
 | ReturnValueSerializeTime | Interval | The time taken to serialize a return value
 | ReturnValueSerialized | Count | A return value was serialized
 | SerializedReturnValueSize | Amount | The size* of a serialized return value
 | ReturnValueDeserializeTime | Interval | The time taken to deserialize a return value
 | ReturnValueDeserialized | Count | A return value was deserialized
RemoteReceiverDecompressor | StringDecompressTime | Interval | The time taken to decompress a string
 | RemoteReceiverDecompressorReadBufferCreated | Count | A new read buffer was created while decompressing a string
 | StringDecompressed | Count | A string was decompressed
RemoteSenderCompressor | StringCompressTime | Interval | The time taken to compress a string
 | StringCompressed | Count | A string was compressed
 | CompressedStringSize | Amount | The size* of a compressed string
TcpRemoteReceiver | TcpRemoteReceiverReconnected | Count | The TcpRemoteReceiver class received a reconnect request and successfully reconnected
 | MessageReceiveTime | Interval | The time taken to receive a message via a TCP network. Note that for small messages, the operating system will typically have buffered the message before the read process starts. As such this metric often does not include the time to physically read the data from the network, and hence the value recorded is often much less than for the 'MessageSendTime' metric in the TcpRemoteSender class.
 | MessageReceived | Count | A message was received via a TCP network
 | ReceivedMessageSize | Amount | The size* of a message received via a TCP network
 | TcpRemoteReceiverDuplicateSequenceNumber | Count | The TcpRemoteReceiver class received a message with a duplicate sequence number and discarded the message
 | TcpRemoteReceiverReadBufferCreated | Count | A new read buffer was created while receiving a message via a TCP network (only for the Java version of this class)
TcpRemoteSender | MessageSendTime | Interval | The time taken to send a message via a TCP network
 | MessageSent | Count | A message was sent via a TCP network
 | TcpRemoteSenderReconnected | Count | The TcpRemoteSender class successfully reconnected after encountering a network error

All Interval metrics are recorded in milliseconds.

The classes implementing IMessageSender do not record the message size. The reason for this is that the message size is recorded by the MethodInvocationSerializer and RemoteSenderCompressor classes, and hence can easily be obtained from these metrics.

* In these cases the size is recorded as the length of the string holding the entity. The actual size in terms of bytes will depend on the characters in the string and the encoding scheme used.
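For example, in Java the character count and the UTF-8 byte count of a string diverge as soon as non-ASCII characters appear (the strings below are purely illustrative):

```java
import java.nio.charset.StandardCharsets;

public class StringSizeExample {
    public static void main(String[] args) {
        String ascii = "compressed";
        String accented = "compréssed";  // contains one non-ASCII character

        // For pure ASCII, length in characters equals length in UTF-8 bytes
        System.out.println(ascii.length());                                    // 10
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);     // 10

        // 'é' occupies one char but two UTF-8 bytes, so the byte count is larger
        System.out.println(accented.length());                                 // 10
        System.out.println(accented.getBytes(StandardCharsets.UTF_8).length);  // 11
    }
}
```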

For the TcpRemoteReceiver class, the recorded size represents the message body only, and does not include any message header or delimiter characters.

See MethodInvocationRemoting SampleApplication5 project for details of how metrics are logged and viewed from MethodInvocationRemoting.