Today, monitoring data analysis is an underrated but important process in the tech industry. Almost every industry gathers and analyzes monitoring data to improve its services or to predict critical issues in advance. However, monitoring data exhibits the V's of big data (i.e., Volume, Variety, Velocity, Value, and Veracity), and the exploration of big monitoring data poses several issues and challenges. Firstly, a wide range of monitoring data analysis tools is available, and these tools offer a variety of functional and non-functional features that affect the analysis process. Each of these features, however, comes with its own setbacks, which makes selecting a suitable monitoring data analysis tool challenging. Secondly, the big monitoring data analysis process consists of two main operations: querying and processing large amounts of data. Because the volume of monitoring data is large, these operations require a scalable and reliable architecture to extract, aggregate, and analyze data at an arbitrary range of granularity. Thirdly, the results of the analysis form the knowledge of the system and must be shared and communicated. The contribution of this research study is two-fold. Firstly, we propose a generic performance evaluation methodology. The methodology uses the Design of Experiments (DoE) method to assess tools, workflows, and techniques, and the evaluation results it produces provide a basis for tool selection. Secondly, we design and implement a big monitoring data analysis architecture that provides advanced analytics such as workload forecasting and pattern matching, and offers these services in an available and scalable environment. We implement our design using distributed tools such as Apache Solr, Apache Hadoop, and Apache Spark, and we assess the performance aspects of the architecture (i.e., latency and fault tolerance) using the proposed evaluation methodology.
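To make the aggregation operation concrete, the following is a minimal sketch of the kind of roll-up query such an architecture might run on Apache Spark; the column names (timestamp, host, cpu_util) and the HDFS paths are hypothetical placeholders, not the schema used in this study.

```python
# Minimal sketch: aggregating raw monitoring samples at a chosen
# granularity with Apache Spark. Column names and paths below are
# hypothetical placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("monitoring-rollup").getOrCreate()

# Raw samples: one row per (timestamp, host) measurement.
samples = spark.read.parquet("hdfs:///monitoring/samples")

# Roll the samples up into 5-minute windows per host.
rollup = (
    samples
    .groupBy(F.window("timestamp", "5 minutes"), "host")
    .agg(
        F.avg("cpu_util").alias("avg_cpu"),
        F.max("cpu_util").alias("peak_cpu"),
        F.count("*").alias("n_samples"),
    )
)

rollup.write.mode("overwrite").parquet("hdfs:///monitoring/rollup_5m")
```

Changing the window duration ("5 minutes" above) is what makes the granularity arbitrary: the same grouping logic yields per-minute, hourly, or daily roll-ups without altering the rest of the query.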