
Molecular Connections
http://molecularconnections.com
Jobs at Molecular Connections
PROFESSIONAL SUMMARY:
- 3+ years of extensive programming experience developing web-based applications and client-server systems using Ruby, Rails, RESTful Rails, Hibernate, JavaScript, AJAX, CSS, HTML/DHTML, XML, Java, J2EE, and JRuby
- Experience designing and testing software systems using Cucumber and RSpec
- Strong debugging and problem-solving skills, with an excellent understanding of system development methodologies, techniques, and tools
- Experience with Agile development methodologies, including Extreme Programming (XP) and Scrum
- Experience with MVC (Model-View-Controller) architecture and frameworks
- Working knowledge of Amazon services.
- Extensive working experience with SQL, PL/SQL and Oracle applications
- Experience with web application servers such as Apache and Mongrel
- Experience in XML technologies
- Programming experience in Ruby using tools like Eclipse, NetBeans IDE
- Experience with service-oriented architecture (SOA) and web services using SOAP, WSDL, UDDI, and XML
- Experience in Oracle, PostgreSQL, DB2, MySQL, SQL Server Databases
- Experience in implementing Design Patterns like MVC
- Involved in gathering user requirements, system analysis, design, development, testing and implementation
- Quick to master new concepts; able to work both in a group and independently, with excellent communication skills
- Strong knowledge of the CRUD (create, read, update, delete) methodology (see the sketch below)
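Since the summary highlights REST and CRUD, here is a minimal sketch of CRUD endpoints. It is illustrative only: the resume's own stack is Ruby on Rails, but the sketch uses Python with Flask, and the in-memory `items` dict is a hypothetical stand-in for a database table.

```python
# Minimal RESTful CRUD sketch (illustrative; the resume's stack is Rails).
# The in-memory `items` dict stands in for a database table.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
items = {}
next_id = 1

@app.post("/items")                       # Create
def create_item():
    global next_id
    record = request.get_json(force=True)
    new_id = next_id
    items[new_id] = record
    next_id += 1
    return jsonify({"id": new_id, **record}), 201

@app.get("/items/<int:item_id>")          # Read
def read_item(item_id):
    if item_id not in items:
        abort(404)
    return jsonify(items[item_id])

@app.put("/items/<int:item_id>")          # Update
def update_item(item_id):
    if item_id not in items:
        abort(404)
    items[item_id] = request.get_json(force=True)
    return jsonify(items[item_id])

@app.delete("/items/<int:item_id>")       # Delete
def delete_item(item_id):
    if items.pop(item_id, None) is None:
        abort(404)
    return "", 204
```

Each route maps one CRUD verb onto an HTTP method, which is the same mapping Rails' RESTful resources generate.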
TECHNICAL SKILLS:
- Languages: Ruby, SQL, PL/SQL, C, C++, VB, ASP, Java, Unix shell script
- Tools/IDEs: Eclipse, NetBeans IDE, PL/SQL Developer, Git, Subversion
- RoR Tools/APIs: Cucumber, RSpec
- Internet: JavaScript, AJAX, HTML, DHTML, CSS, XML
- Databases: Oracle, PL/SQL, MySQL, DB2, SQL Server 2005, PostgreSQL
- Methodologies: Multithreading, Design Patterns, SOA, SDLC
Responsibilities:
- Interacted with business team and gathered requirements
- Prepared design specifications
- Involved in Status Meetings and suggested new enhancements to the existing application
- Implemented RESTful authentication plug-in for authentication and login system
- Used RHTML, Cascading Style Sheets (CSS), and RJS to build the web pages
- Implemented the presentation using HTML and a well-defined API interface to allow access to the application services layer
- Validated input using Rails validation helpers
- Followed Agile software development methodology and SCRUM
- Designed and developed a service-oriented architecture (SOA) on which all future sales demos and reference implementations would be built
- Involved in implementing user mailing module using ActionMailer
- Designed the SQL Server database and wrote stored procedures and triggers for efficient data processing and performance
- Made effective use of design patterns, namely Observer, Singleton, and Factory Method
- Used Git for version control

- Big Data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis, and data integration, implementing enterprise-level Big Data systems.
- A skilled developer with strong problem-solving, debugging, and analytical capabilities who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components such as Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala, and Oozie.
- Hands-on experience creating real-time data streaming solutions using Spark Core, Spark SQL and DataFrames, Kafka, Spark Streaming, and Apache Storm (see the streaming sketch after this list).
- Excellent knowledge of Hadoop architecture and the daemons of a Hadoop cluster: the NameNode, DataNode, ResourceManager, NodeManager, and Job History Server.
- Worked with both the Cloudera and Hortonworks Hadoop distributions; experience managing Hadoop clusters with the Cloudera Manager tool.
- Well versed in the installation, configuration, and management of Big Data tooling and the underlying infrastructure of a Hadoop cluster.
- Hands-on experience coding MapReduce/YARN programs in Java, Scala, and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked with Spark (Scala) on clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark jobs in Python, using DataFrames and the Spark SQL API for faster processing; handled importing data from different sources into HDFS with Sqoop and transforming it with Hive and MapReduce.
- Used the Spark DataFrames API on the Cloudera platform to perform analytics on Hive data (a minimal sketch follows this list).
- Hands-on experience with Spark MLlib for predictive intelligence and customer segmentation, including within Spark Streaming applications (see the MLlib sketch after this list).
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Created data pipelines for ingestion and aggregation events, loading consumer response data into Hive external tables in HDFS to serve as the feed for Tableau dashboards.
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth understanding of Oozie for scheduling Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 and S3, including automating cluster provisioning and scaling in the AWS cloud.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge of installing, configuring, supporting, and managing Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions, including on Amazon Web Services (AWS).
- Experienced in writing ad hoc queries with Cloudera Impala, including Impala analytical functions.
- Experience creating DataFrames with PySpark and operating on them in Python.
- In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
- Established connections to multiple Redshift clusters (Bank Prod, Card Prod, SBBDA) and provided access for pulling the information needed for analysis.
- Generated various knowledge reports with Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Experienced in projects using JIRA, testing frameworks, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying, and using much of the AWS stack (including EC2 and S3), with a focus on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development, with Software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) for computational tasks and Simple Storage Service (S3) for storage.
- Good working experience importing data with Sqoop and SFTP from sources such as RDBMS, Teradata, mainframes, Oracle, and Netezza into HDFS, and transforming it with Hive, Pig, and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases, including HBase, Cassandra, and MongoDB, and their integration with Hadoop clusters.
- Hands-on experience with Hadoop Big Data technology: MapReduce, Pig, and Hive as analysis tools, and Sqoop and Flume for data import/export.
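Several bullets above reference analytics with the Spark DataFrame and Spark SQL APIs over Hive tables. A minimal PySpark sketch of that pattern, assuming a Hive-enabled cluster; the `consumer_response` table and its columns are hypothetical:

```python
# Minimal PySpark sketch: DataFrame/Spark SQL analytics over a Hive table.
# The table name `consumer_response` and its columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-analytics-sketch")
    .enableHiveSupport()          # requires a Hive-configured cluster
    .getOrCreate()
)

# Read a Hive external table into a DataFrame.
df = spark.table("consumer_response")

# DataFrame API: aggregate responses per event type.
summary = (
    df.groupBy("event_type")
      .agg(F.count("*").alias("events"),
           F.avg("response_ms").alias("avg_response_ms"))
)

# Equivalent Spark SQL over the same table.
summary_sql = spark.sql("""
    SELECT event_type, COUNT(*) AS events, AVG(response_ms) AS avg_response_ms
    FROM consumer_response
    GROUP BY event_type
""")

# Write results back to a Hive table that a Tableau dashboard could read.
summary.write.mode("overwrite").saveAsTable("consumer_response_summary")
```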
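For the real-time bullets, a minimal Spark Structured Streaming sketch that reads from Kafka; the broker address, topic, and checkpoint path are placeholders, and the spark-sql-kafka connector package must be available on the cluster:

```python
# Minimal Spark Structured Streaming sketch reading from Kafka.
# Broker, topic, and checkpoint path are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast the value to string and
# count messages per one-minute window as a simple running aggregate.
counts = (
    stream.select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp"))
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/kafka-sketch-checkpoint")
    .start()
)
query.awaitTermination()
```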
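For the MLlib bullet, a small customer-segmentation sketch using the DataFrame-based MLlib API; the feature columns and toy data are hypothetical:

```python
# Minimal MLlib sketch: customer segmentation with KMeans.
# Columns (`recency`, `frequency`, `monetary`) and rows are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("segmentation-sketch").getOrCreate()

customers = spark.createDataFrame(
    [(1, 10.0, 4.0, 250.0), (2, 90.0, 1.0, 40.0), (3, 5.0, 9.0, 900.0)],
    ["customer_id", "recency", "frequency", "monetary"],
)

# Assemble numeric columns into the single feature vector MLlib expects.
assembler = VectorAssembler(
    inputCols=["recency", "frequency", "monetary"], outputCol="features"
)
features = assembler.transform(customers)

# Cluster customers into k segments; `prediction` holds the segment id.
model = KMeans(k=2, seed=42, featuresCol="features").fit(features)
model.transform(features).select("customer_id", "prediction").show()
```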

We are looking to fill the role of Kubernetes engineer. To join our growing team, please review the list of responsibilities and qualifications.
Kubernetes Engineer Responsibilities
- Install, configure, and maintain Kubernetes clusters.
- Develop Kubernetes-based solutions.
- Improve Kubernetes infrastructure.
- Work with other engineers to troubleshoot Kubernetes issues (a minimal inspection sketch follows the requirements list below).
Kubernetes Engineer Requirements & Skills
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Linux/Unix experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
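Troubleshooting typically starts with inspecting cluster state. A minimal sketch using the official `kubernetes` Python client, assuming a working kubeconfig; the triage steps are illustrative, not a prescribed workflow:

```python
# Minimal cluster-inspection sketch using the official `kubernetes` client.
# Assumes `pip install kubernetes` and a kubeconfig pointing at the cluster.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
v1 = client.CoreV1Api()

# List pods that are not Running/Succeeded, a common first triage step.
for pod in v1.list_pod_for_all_namespaces().items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

# Recent events often explain scheduling or image-pull failures.
for event in v1.list_event_for_all_namespaces(limit=20).items:
    print(event.type, event.reason, event.message)
```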

We are looking to fill the role of AWS DevOps engineer. To join our growing team, please review the list of responsibilities and qualifications.
Responsibilities:
- Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS)
- Balance hardware, network, and software layers to arrive at a scalable and maintainable solution that meets requirements for uptime, performance, and functionality
- Monitor server applications and use tools and log files to troubleshoot and resolve problems (a minimal monitoring sketch follows this list)
- Maintain 99.99% availability of the web and integration services
- Anticipate, identify, mitigate, and resolve issues relating to client facing infrastructure
- Monitor, analyse, and predict trends for system performance, capacity, efficiency, and reliability and recommend enhancements in order to better meet client SLAs and standards
- Research and recommend innovative and automated approaches for system administration and DevOps tasks
- Deploy and decommission client environments for multi- and single-tenant hosted applications, following established processes and procedures and updating them as needed
- Follow and develop CPA change control processes for modifications to systems and associated components
- Practice configuration management, including maintenance of component inventory and related documentation per company policies and procedures
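The monitoring responsibility above can be illustrated with a minimal boto3 (AWS SDK for Python) sketch; the instance ID and region are hypothetical placeholders:

```python
# Minimal AWS monitoring sketch with boto3 (assumed installed/configured).
# Instance ID and region are hypothetical placeholders.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Check instance status, a first step when chasing availability issues.
statuses = ec2.describe_instance_status(InstanceIds=["i-0123456789abcdef0"])
for s in statuses["InstanceStatuses"]:
    print(s["InstanceId"],
          s["InstanceStatus"]["Status"],
          s["SystemStatus"]["Status"])

# Pull average CPU over the last hour to spot capacity trends.
now = datetime.now(timezone.utc)
metrics = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(metrics["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```

Feeding such metrics into alerting is what makes a 99.99% availability target actionable rather than aspirational.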
Qualifications:
- Git/GitHub version control tools
- Linux and/or Windows virtualisation (VMware, Xen, KVM, VirtualBox)
- Cloud computing (AWS, Google App Engine, Rackspace Cloud)
- Application Servers, servlet containers and web servers (WebSphere, Tomcat)
- Bachelor's or Master's degree and 2+ years of experience in software development
- Must have experience with AWS VPC networking and security

A major part of the responsibilities is mentoring teams to carry forward proven, optimised solutions, building environments conducive to knowledge transfer, and maintaining delivery standards.
Responsibilities
- Design and develop robust scalable database systems
- Design, build and deploy internal applications to support our native technical ecosystem
- Collaborate with Subject-Matter experts to design data-driven modules for analysis
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering or equivalent
- 3-4 years of experience in Java
Skills
- Expertise in object-oriented programming concepts and data structures
- Strong knowledge of working with XML and JSON
- Experience with large-scale distributed storage and database systems (MySQL, MongoDB, GraphDB)
- Good knowledge of indexing/search libraries like Lucene/Solr/Elasticsearch (a minimal sketch follows this list)
- Exposure to Spring MVC architecture and RESTful APIs
- Well-versed with Agile methodology of SDLC
- Good to have: knowledge of standard DevOps tooling, such as build tools (Maven/Gradle), continuous integration (Jenkins), and version control (GitHub/GitLab)
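For the indexing/search item above, a minimal sketch with the `elasticsearch` Python client; Python is used for illustration even though the posting's stack is Java, and the host and index name are hypothetical:

```python
# Minimal indexing/search sketch with the `elasticsearch` Python client.
# Host and index name are hypothetical placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a document; Elasticsearch builds the inverted index automatically.
es.index(index="articles", id="1",
         document={"title": "Indexing basics", "body": "full-text search"})
es.indices.refresh(index="articles")   # make the document searchable now

# Full-text query against the indexed field.
hits = es.search(index="articles", query={"match": {"body": "search"}})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```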

Your primary role will be implementing the application for desktop or mobile devices with a focus on performance. With your understanding of AngularJS/React best practices, you will create the modules and components with which you architect the application. You will take existing designs and front-end templates, enhance them with CSS animations, and implement idiomatic markup. You will team up with the back-end developers to connect the application to their APIs.
Responsibilities
- Building, optimising and maintaining front-end web apps
- Maintaining high performance and compatibility across platforms and devices
- Writing tested, idiomatic, and documented JavaScript, HTML and CSS
- Understanding what is needed for a smooth workflow between yourself, the front-end developers and designers
- Communicate thoroughly with the back-end department to help build a best-practice RESTful API
- Integrate external web services and APIs using standard methods (a minimal sketch follows this list); a thorough understanding of the platform's components is essential
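A minimal sketch of integrating an external REST API "using standard methods", shown with Python's `requests` for illustration (in this posting's own stack this would be a fetch or axios call); the endpoints are hypothetical:

```python
# Hypothetical endpoints; the pattern is plain HTTP verbs plus error checks.
import requests

BASE = "https://api.example.com"

# Read a resource.
resp = requests.get(f"{BASE}/users/42", timeout=5)
resp.raise_for_status()          # surface HTTP errors instead of bad data
user = resp.json()

# Create a resource referencing the first response.
created = requests.post(
    f"{BASE}/orders",
    json={"user_id": user["id"], "sku": "A-100"},
    timeout=5,
)
created.raise_for_status()
print(created.status_code, created.json())
```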
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering or equivalent
- 1-3 years of experience in JS based development
Skills
- Programming expertise in JavaScript/HTML5, Angular (v7+), React (v16), Express.js, and Bootstrap
- Hands-on working experience with MVC frameworks like Angular and React
- Excellent communication skills
- Exposure to Sass, Grunt, and Node.js

About the company
We want to create a connected healthcare ecosystem where all the generated medical data is automatically collated and stored at one single location. This will help remove inefficiencies in healthcare delivery, healthcare insurance & financing, population healthcare management, and build new solutions in predictive healthcare and diagnosis.
We are also building a customer facing product where users will be able to look up for information about medical tests including interpretation of results, and other health-related queries.
Similar companies
Aganitha accelerates drug discovery and development with in silico solutions
Climate Connect Digital (CCD) is a London-based net zero solutions provider, and the independent cleantech software services arm of ReNew Energy Global (NASDAQ: RNW), a major international renewables, storage, and hydrogen developer.
CCD has over a decade of experience in carbon advisory services and software solutions for the climate and clean energy domains.
It has major offices in the UK and India, with a diverse team of 186 people working remotely worldwide, with over 50% of the team in tech and product.
CCD applies its artificial intelligence (AI) and machine learning (ML) capabilities to build robust solutions for global sustainability problems. These are deployed on several national grids, helping to manage 12 GW of renewable energy generation and 55 GW of peak power demand, and serving 298 million people.
We are committed to helping our customers succeed in their net zero journeys, to build a more sustainable future for all.
We offer competitive salaries based on prevailing market rates. In addition to your introductory package, you can expect to receive the following benefits:
- Flexible working hours and leave policy
- Learning and development opportunities
- Medical insurance, term insurance, and gratuity benefits over and above salary
- Access to industry and domain thought leaders
At Climate Connect, you get a rare opportunity to join an established company at the early stages of a significant and well-backed global growth push.
We are building a remote-first organisation, an ethos ingrained in the team, and we understand its importance for the success of any next-generation technology company. The team includes passionate, self-driven people with unconventional backgrounds, and we're seeking a similar spirit with the right potential.