Molecular Connections
http://molecularconnections.com
Jobs at Molecular Connections
The recruiter has not been active on this job recently. You may apply but please expect a delayed response.
- Big data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis, and data integration, implementing enterprise-level big data systems.
- A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, Zookeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
- Hands on experience in creating real-time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include the NameNode, DataNode, ResourceManager, NodeManager and Job History Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in the installation, configuration, and management of Big Data and the underlying infrastructure of Hadoop clusters.
- Hands on experience in coding MapReduce/Yarn Programs using Java, Scala and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark using Scala on clusters for analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster data processing; handled importing data from different data sources into HDFS using Sqoop, performing transformations with Hive and MapReduce before loading the data into HDFS.
- Used Spark Data Frames API over Cloudera platform to perform analytics on Hive data.
- Hands on experience with Spark MLlib for predictive intelligence and customer segmentation, and with maintaining Spark Streaming applications.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Worked on creating data pipelines for ingestion and aggregation events, loading consumer response data into Hive external tables in HDFS to serve as a feed for Tableau dashboards.
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 & S3, including automating cluster setup and extension in the AWS cloud.
- Extensively worked on Spark using Python on clusters for analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
- Experienced in writing ad hoc queries using Cloudera Impala, including Impala analytical functions.
- Experience in creating Data frames using PySpark and performing operation on the Data frames using Python.
- In-depth understanding/knowledge of Hadoop architecture and its components, such as HDFS, the MapReduce programming paradigm, High Availability and the YARN architecture.
- Established connections to multiple Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various kinds of knowledge reports using Power BI based on Business specification.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Experienced in projects using JIRA, testing frameworks, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying, and utilizing most of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development, with Software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience in importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle and Netezza into HDFS, and performing transformations on it using Hive, Pig and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and their integration with Hadoop clusters.
- Hands on experience in Hadoop big data technology, working with MapReduce, with Pig and Hive as analysis tools, and with Sqoop and Flume as data import/export tools.
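Several bullets above mention coding MapReduce programs. As an illustrative sketch only (not taken from the résumé), the map -> shuffle -> reduce pattern that Hadoop automates across a cluster can be shown in plain Python with a word count, the canonical MapReduce example:

```python
from collections import defaultdict
from itertools import chain

# Plain-Python walk-through of map -> shuffle -> reduce. In a real Hadoop
# cluster the framework distributes these phases across worker nodes; the
# input lines here are made up for the example.

def map_phase(line):
    """Emit (word, 1) pairs for each word in one line of input."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle_phase(pairs):
    """Group intermediate pairs by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the grouped counts for each word."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big pipelines", "data pipelines at scale"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
counts = reduce_phase(shuffle_phase(pairs))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2, 'at': 1, 'scale': 1}
```

The same three functions map directly onto a Hadoop job's Mapper, the framework's shuffle, and its Reducer.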
We are looking to fill the role of Kubernetes engineer. To join our growing team, please review the list of responsibilities and qualifications.
Kubernetes Engineer Responsibilities
- Install, configure, and maintain Kubernetes clusters.
- Develop Kubernetes-based solutions.
- Improve Kubernetes infrastructure.
- Work with other engineers to troubleshoot Kubernetes issues.
Kubernetes Engineer Requirements & Skills
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Linux/Unix experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
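The responsibilities above center on installing, configuring, and maintaining Kubernetes workloads. A minimal sketch of what that looks like in practice is a Deployment manifest; every name and the container image below are placeholders invented for this example, not details from the posting:

```yaml
# Hypothetical example manifest: runs three replicas of a stateless web pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment        # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```

A manifest like this would typically be applied with `kubectl apply -f deployment.yaml` and inspected with `kubectl get deployments`.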
We are looking to fill the role of AWS DevOps engineer. To join our growing team, please review the list of responsibilities and qualifications.
Responsibilities:
- Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS)
- Balance hardware, network, and software layers to arrive at a scalable and maintainable solution that meets requirements for uptime, performance, and functionality
- Monitor server applications and use tools and log files to troubleshoot and resolve problems
- Maintain 99.99% availability of the web and integration services
- Anticipate, identify, mitigate, and resolve issues relating to client facing infrastructure
- Monitor, analyse, and predict trends for system performance, capacity, efficiency, and reliability and recommend enhancements in order to better meet client SLAs and standards
- Research and recommend innovative and automated approaches for system administration and DevOps tasks
- Deploy and decommission client environments for multi-tenant and single-tenant hosted applications, following established processes and procedures and updating them as needed
- Follow and develop CPA change control processes for modifications to systems and associated components
- Practice configuration management, including maintenance of component inventory and related documentation per company policies and procedures
Qualifications :
- Git/GitHub version control tools
- Linux and/or Windows virtualisation (VMware, Xen, KVM, VirtualBox)
- Cloud computing (AWS, Google App Engine, Rackspace Cloud)
- Application Servers, servlet containers and web servers (WebSphere, Tomcat)
- Bachelor's / Master's degree; 2+ years of experience in software development
- Must have experience with AWS VPC networking and security
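The responsibilities mention engineering solutions with CloudFormation. As a hedged illustration of that infrastructure-as-code workflow, a minimal template might look like the following; the resource and bucket names are invented for the example:

```yaml
# Hypothetical minimal template: a single versioned S3 bucket.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example artifact bucket (all names are placeholders).
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-artifact-bucket   # placeholder; bucket names are globally unique
      VersioningConfiguration:
        Status: Enabled
```

Such a template would typically be deployed with `aws cloudformation deploy --template-file template.yaml --stack-name demo`, letting the stack be versioned, reviewed, and torn down like any other code.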
Mentoring teams to carry forward recognised and optimised solutions, building environments conducive to knowledge transfer, and maintaining delivery standards form a major part of the responsibilities.
Responsibilities
- Design and develop robust scalable database systems
- Design, build and deploy internal applications to support our native technical ecosystem
- Collaborate with Subject-Matter experts to design data-driven modules for analysis
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering or equivalent
- 3-4 years of experience in Java
Skills
- Expertise in object-oriented programming concepts and data structures
- Strong knowledge of working with XML/JSON
- Experience with large-scale distributed storage and database systems (MySQL, MongoDB, GraphDB)
- Good knowledge of indexing/search libraries like Lucene/Solr/Elasticsearch
- Exposure to Spring MVC architecture and RESTful APIs
- Well-versed with Agile methodology of SDLC
- Good to have: Knowledge of standard DevOps skills like Build tools (Maven/gradle), Continuous integration (Jenkins), Version Control (Github/Gitlab)
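The skills list asks for knowledge of indexing/search libraries like Lucene/Solr/Elasticsearch. The core data structure behind all three is the inverted index; here is a toy sketch of the idea in plain Python (the documents and queries are made up, and real engines add tokenization, scoring, and on-disk posting lists on top of this):

```python
from collections import defaultdict

# Toy inverted index: map each term to the set of document ids containing it,
# then answer an AND-query by intersecting the posting sets.
docs = {
    1: "java object oriented programming",
    2: "distributed storage systems",
    3: "java distributed systems",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)   # the posting set for each term

def search(*terms):
    """Return ids of documents containing every query term."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

print(sorted(search("java", "distributed")))  # [3]
```

Looking up a term is a dictionary hit rather than a scan over every document, which is what makes Lucene-style search fast at scale.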
Your primary role will be the implementation of the application for either desktop or mobile devices with a focus on performance. With your understanding of AngularJS / React JS best practices you will create the modules and components with which you architect the application. You will take existing designs and front-end templates and enhance them with CSS animations and idiomatic markup. You will team up with the back-end developers to connect the application to their services.
Responsibilities
- Building, optimising and maintaining front-end web apps
- Maintaining high performance and compatibility across platforms and devices
- Writing tested, idiomatic, and documented JavaScript, HTML and CSS
- Understanding what is needed for a smooth workflow between yourself, the front-end developers and designers
- Communicate thoroughly with the back-end department to help build a best-practice RESTful API
- Integrate external web services and APIs using standard methods. A thorough understanding of the components of the platform is essential.
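The responsibilities above describe connecting a front end to a back-end RESTful API. As a self-contained sketch of that hand-off (the `/api/status` route and its payload are invented for the example; a real stack would use a proper framework and an HTTP client library), a toy JSON endpoint and a fetch against it look like this:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Toy back-end endpoint plus a front-end-style JSON fetch, all in one process.
class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/api/status"
payload = json.loads(urlopen(url).read())
server.shutdown()
print(payload)  # {'status': 'ok'}
```

The same shape applies on the client side in JavaScript: the front end fetches a URL, parses the JSON body, and renders from the resulting object.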
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering or equivalent
- 1-3 years of experience in JS based development
Skills
- Programming expertise in JavaScript/HTML5, Angular JS v7+ / React JS v16, Express JS, Bootstrap.
- Hands-on working experience with MVC frameworks like Angular.js & React.js.
- Excellent communication skills.
- Exposure to Sass, Grunt, Node.js.