Key Qualifications
  • Knowledge of cloud-based deployment, security, and networking concepts in AWS and Azure.

  • Knowledge of or experience with algorithms, data structures, complexity analysis, and software design
  • Knowledge of cloud-based big data automation and orchestration solutions.
  • Interest in designing, analyzing and troubleshooting large-scale distributed systems.
  • Responsible for implementing, automating, deploying, and monitoring our application’s infrastructure, as well as troubleshooting and administering applications and databases
  • Responsible for the availability, performance, infrastructure security and scalability of our product
  • Strong experience in continuous integration and continuous deployment (CI/CD)
  • Practical experience with Agile methodologies (e.g., Scrum)
  • Ability to diagnose technical problems, debug, optimize code, and automate routine tasks
  • Keep costs under control, ensure optimized solutions are in place, and eliminate spend leakage.
  • Identify risks, respond with a sense of urgency, and work effectively both within a team and independently
  • Experience building hybrid cloud and on-premises solutions. 
  • Solid experience building and optimizing big data pipelines, architectures, and data sets.
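The pipeline skills listed above can be illustrated with a minimal, self-contained extract-transform-load sketch; the record shape, field names, and in-memory sink are hypothetical and purely for illustration:

```python
# Minimal ETL sketch (illustrative only; field names are hypothetical).

def extract(rows):
    """Yield raw records from an upstream source (here, a list stub)."""
    yield from rows

def transform(record):
    """Normalize a raw record: coerce the amount to float, clean the region."""
    return {
        "id": record["id"],
        "amount": float(record["amount"]),
        "region": record["region"].strip().lower(),
    }

def load(records, sink):
    """Append transformed records to the destination store; return its size."""
    for r in records:
        sink.append(r)
    return len(sink)

raw = [{"id": 1, "amount": "19.99", "region": " EU "}]
sink = []
load((transform(r) for r in extract(raw)), sink)
```

A production pipeline would replace the list stubs with real sources and sinks (for example S3, Glue, or Redshift) and add validation and error handling.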


  • Expertise in common AWS services such as EC2, ECS, S3, ELB, and VPC.
  • Knowledgeable in hardware selection, resolving networking and OS issues, and performance tuning
  • Manage end-to-end production workloads hosted on Docker and AWS.
  • Rich experience in data ingestion from a variety of sources using AWS Lambda, Python, AWS Glue, etc.
  • Well versed in continuous integration and continuous delivery (CI/CD) tools, with experience automating deployments using Jenkins, Train, Windeploy, Kubernetes, etc.
  • Experience in cloud automation using tools such as Ansible & Terraform.
  • Expertise in object-oriented programming languages, with coding skills in at least one language, preferably Python
  • Proven comfort with web servers such as Apache and NGINX
  • Understanding of Business Intelligence and Data Warehousing concepts and methods.
  • Fully conversant with big-data processing approaches and schema-on-read methodologies. Preference for deep understanding of Spark, Databricks and Delta Lake, and applying them to solve data science and machine learning business problems.
  • Advanced working SQL knowledge and experience with relational databases and query authoring, plus working familiarity with a variety of databases (SQL Server, Oracle) and AWS services (RDS, Redshift, EC2, EMR, etc.)
  • Working knowledge of REST and implementation patterns pertaining to Data and Analytics.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores (Kafka, Kinesis, Storm), plus ETL orchestration tools such as Airflow.
  • Build and test optimal data pipeline architectures, preferably in a cloud environment (AWS experience is a must)
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud-based big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Knowledge of and hands-on experience with monitoring tools such as Splunk, IP Soft, and Sockeye
  • Understand and implement practices to comply with PHI, GDPR and other emerging data privacy initiatives.
  • Help your coworkers by creating documentation and detailed knowledge sharing for continuous improvement.
  • Maintain applications once they are live by measuring and monitoring availability, latency, and overall system health with a focus on business activities, and continuously evaluate cost and waste
  • Engage in and improve the whole lifecycle of services from inception and design, through deployment, operation, capacity planning and launch reviews.
  • Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity; this includes automation for various other operational needs.
  • Troubleshoot infrastructure issues, review log files, update documentation, and maintain a knowledge base of resolutions
  • Work closely with the application Development team to understand the platform and create tools/utilities to help with production management
  • Work closely with Application Development to ensure the support team has excellent knowledge of the application set; own and maintain the support knowledge base and documentation
  • Use analytical skills to find trends in the environment and drive out problems.
  • Test and tune network, hardware, and software configurations to maximize performance
  • Take ownership of and manage production requests, questions, and issues, and perform root cause analysis for outages/incidents
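The Lambda-based ingestion pattern mentioned above can be sketched minimally as follows. The event shape follows the standard S3 notification format; the bucket and key names in the usage example are hypothetical, and a real handler would fetch each object (e.g., via boto3) before handing it to a Glue job or transformation step:

```python
import json

def handler(event, context=None):
    """Lambda-style entry point: pull (bucket, key) pairs out of an S3
    event notification so a downstream step can ingest each object."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        objects.append({
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
        })
    # A real function would read each object here (e.g., with boto3)
    # and pass its contents to the transformation step.
    return {"statusCode": 200, "body": json.dumps(objects)}
```

For example, invoking the handler with a single-record S3 notification returns a 200 response whose body lists the one bucket/key pair that triggered it.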


  • 1-3 years of experience in a production environment with a solid software development background and understanding of performance tuning, end-to-end troubleshooting, networking fundamentals and appropriate attention to detail.
  • Bachelor's/Master's degree in Computer Science, Information Systems, or a related field