

- Big Data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis and data integration, and in implementing enterprise-level Big Data systems.
- A skilled developer with strong problem-solving, debugging and analytical capabilities who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components such as Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
- Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include the NameNode, DataNode, ResourceManager, NodeManager and Job History Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in the installation, configuration and management of Big Data systems and the underlying infrastructure of a Hadoop cluster.
- Hands-on experience in coding MapReduce/YARN programs using Java, Scala and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark with Scala on the cluster for analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark applications in Python using the DataFrame and Spark SQL APIs for faster data processing; imported data from different sources into HDFS using Sqoop and performed transformations with Hive and MapReduce before loading the results into HDFS.
- Used the Spark DataFrame API on the Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib, used for predictive intelligence and customer segmentation, and with maintaining Spark Streaming applications.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Worked on creating data pipelines for ingestion and aggregation events, loading consumer response data into Hive external tables at HDFS locations to serve as feeds for Tableau dashboards.
- Hands-on experience in using Sqoop to import data from RDBMS into HDFS and vice versa.
- In-depth understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands-on expertise in real-time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 & S3, including automating the setup and scaling of clusters in the AWS cloud.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge of installing, configuring, supporting and managing Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions and on Amazon Web Services (AWS).
- Experienced in writing ad hoc queries using Cloudera Impala, including Impala analytical functions.
- Experience in creating DataFrames using PySpark and performing operations on them in Python; a short sketch of this kind of work follows this list.
- In-depth understanding of Hadoop architecture and its components, such as HDFS, the MapReduce programming paradigm, High Availability and the YARN architecture.
- Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various knowledge reports using Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance and auto-scaling.
- Good experience with use-case development and with software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and the Simple Storage Service (S3) as a storage mechanism.
- Good working experience importing data using Sqoop and SFTP from various sources such as RDBMS, Teradata, mainframes, Oracle and Netezza into HDFS, and performing transformations on it using Hive, Pig and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and in their integration with a Hadoop cluster.
- Hands-on experience in Hadoop Big Data technology, working with MapReduce, Pig and Hive as analysis tools and Sqoop and Flume as data import/export tools.
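
The PySpark DataFrame and Spark SQL work described above can be illustrated with a minimal sketch. This is illustrative only, not code from any of the projects listed: the table name, columns, and HDFS path are hypothetical stand-ins.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Build a Spark session with Hive support so Spark SQL can see Hive tables.
spark = (
    SparkSession.builder
    .appName("consumer-response-aggregation")
    .enableHiveSupport()
    .getOrCreate()
)

# Read a Hive table into a DataFrame (table name is hypothetical).
responses = spark.table("consumer_responses")

# DataFrame API: count responses per event type per day.
daily_counts = (
    responses
    .groupBy("event_type", F.to_date("event_ts").alias("event_date"))
    .agg(F.count("*").alias("responses"))
)

# The same aggregation expressed as Spark SQL over the Hive table.
daily_counts_sql = spark.sql("""
    SELECT event_type, to_date(event_ts) AS event_date, COUNT(*) AS responses
    FROM consumer_responses
    GROUP BY event_type, to_date(event_ts)
""")

# Write to an HDFS location backing a Hive external table (path is hypothetical).
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/feeds/consumer_response_daily"
)
```

Partitioning the output by date keeps the external-table feed cheap to query from dashboard tools such as Tableau.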

Job Description
As an IT Sales Intern, you will be an integral part of our sales team, focusing on field sales activities. This internship is designed to provide you with hands-on experience in IT sales, client engagement, and sales strategy. You will have the opportunity to learn from experienced professionals and contribute to our growth in a dynamic industry.
Responsibilities
- Identify potential clients through research, networking, and market analysis to create a pipeline of leads.
- Develop and maintain relationships with prospective and existing clients by providing information on services.
- Develop a deep understanding of our cybersecurity services to effectively communicate their value propositions to clients.
- Prepare and deliver compelling sales presentations to showcase our IT solutions and address client needs.
- Assist in preparing sales proposals, quotes, and contracts while collaborating with the sales team.
- Perform various tasks, such as organizing sales documents, maintaining sales records, and scheduling appointments.
- Analyze sales data to identify trends, opportunities, and areas for improvement. This may involve creating reports and presentations to share insights with the sales team.
- Actively participate in training sessions and workshops to enhance your knowledge of IT services.
Candidate Skills
- Strong communication and interpersonal skills.
- Eagerness to learn and adapt in a fast-paced environment.
- Ability to work independently and as part of a team.
- Basic understanding of IT concepts and technology trends (preferred).
- Previous sales or customer service experience (preferred).
Qualifications
- Candidates must have a degree in business administration.
- Any vendor-neutral business development training and certification will be preferred.
Benefits
- Hands-on experience in the cybersecurity industry.
- Training and mentorship from experienced professionals.
- Opportunity to develop valuable sales and business development skills
Salary Package: Performance-based
Job Type: Full time (In office)
Start Date: Immediate
Documents Required: Previous Internship Experience Letter (if any), ID Proof, Internship Job Offer Letter (if any), Updated CV, Last Qualification Certificate.
Sr. Data Engineer
Company Profile:
Bigfoot Retail Solutions [Shiprocket] is a logistics platform which connects Indian eCommerce SMBs with logistics players to enable end-to-end solutions.
Our innovative, data-backed platform drives logistics efficiency, helps reduce cost, increases sales throughput by reducing RTO, and improves post-order customer engagement and experience.
Our vision is to power all logistics for the direct commerce market in India, including first mile, linehaul, last mile, warehousing, cross border and O2O.
Position: Sr. Data Engineer
Team: Business Intelligence
Location: New Delhi
Job Description:
We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
Key Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centres and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this list).
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
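
To make the workflow-management tooling above concrete, here is a minimal Airflow DAG sketch that wires three placeholder ETL steps into a daily schedule. The DAG id, owner, and task bodies are hypothetical, and the callables only print; a real pipeline would call out to actual extraction, transformation, and load logic.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw data from a source system (placeholder logic).
    print("extracting raw orders")


def transform(**context):
    # Clean and aggregate the extracted data (placeholder logic).
    print("transforming orders")


def load(**context):
    # Load the result into the warehouse, e.g. Redshift (placeholder logic).
    print("loading into warehouse")


default_args = {
    "owner": "bi-team",  # hypothetical owner
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="orders_daily_etl",  # hypothetical pipeline name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run extract, then transform, then load.
    t_extract >> t_transform >> t_load
```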
Roles and Responsibilities
· Supporting customers by providing helpful information.
· Overseeing the customer service process.
· Resolving customer complaints brought to your attention.
· Answering questions and responding to complaints.
· Maintaining a pleasant work environment for your team.
· Conducting quality assurance surveys with customers and providing feedback to the staff.
· Possessing excellent product knowledge to enhance customer support.
· Handling customer concerns and complaints in a timely manner.
· Informing customers of upcoming promotions, offers, updates and much more.
· Courier tracking: contacting courier personnel and giving feedback to customers.
· Semi-voice process.


To develop applications using new-stack technologies and deliver them effectively, efficiently, on time, to specification and in a cost-effective manner.
The full stack Java Developer should have 5-7 years of experience and be familiar with Java, Python, JavaScript, Spring Boot, continuous integration, branching and merging (Git), pair programming, code reviews, feature toggles, blue-green deployments, TDD and unit testing, agile methodologies (Scrum/XP), design patterns and REST APIs, with a good understanding of networking and security. Familiarity with RDBMS (preferably MariaDB or MySQL) and NoSQL is expected. This Development Engineer will play a hands-on role in developing quality applications within the desired timeframes.
Hands-on experience in designing, building and troubleshooting Genesys contact center solutions
Hands-on experience working with AudioCodes SBCs
Experience working with automation and scripting tools like Python, Jenkins, Ansible and Perl
Experience with SIP trunks
Experience working with Cisco CUCM, Unity Connection and voice gateway routers


Job Responsibilities
· Responsibilities for this position include, but are not limited to, the following.
· 3-6 years of development experience.
· Experience working with Azure cloud-hosted web applications and technologies.
· Design and develop back-end microservices and REST APIs for connected devices, web applications and mobile applications (a minimal sketch follows the responsibilities list below).
· Stay up to date on relevant technologies, plug into user groups, and understand trends and opportunities that ensure we are using the best techniques and tools.
- Meeting with the software development team to define the scope and scale of software projects.
- Designing software system architecture.
- Completing data structures and design patterns.
- Designing and implementing scalable web services, applications, and APIs.
- Developing and maintaining internal software tools.
- Writing low-level and high-level code.
- Troubleshooting and bug fixing.
- Identifying bottlenecks and improving software efficiency.
- Collaborating with the design team on developing micro-services.
- Writing technical documents.
- Engage in continuous professional learning that advances organizational objectives.
- Provide technical support to all internal teams and customers as it relates to the product.
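
As a rough sketch of the back-end REST API work described above: a minimal device-registration service. Flask stands in purely for illustration (the stack here centers on Node.js and Azure), and the endpoints, payload shape, and in-memory store are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database (illustration only).
devices = {}


@app.post("/devices")
def register_device():
    # Register a connected device from a JSON payload.
    payload = request.get_json(force=True)
    device_id = payload["id"]
    devices[device_id] = {"id": device_id, "status": payload.get("status", "offline")}
    return jsonify(devices[device_id]), 201


@app.get("/devices/<device_id>")
def get_device(device_id):
    # Return a device's last known state, or 404 if unknown.
    device = devices.get(device_id)
    if device is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(device)


if __name__ == "__main__":
    app.run(port=8080, debug=True)
```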
Requirements:
- Bachelor’s degree in computer engineering or computer science.
- Previous experience as a full stack engineer, working on IoT products.
- Advanced knowledge of front-end languages including HTML5, CSS, JavaScript, Angular, React.
- Proficiency in back-end languages including Node.js, and basic knowledge of Java and C#.
- Experience with cloud computing APIs and Cloud Providers such as Azure or AWS.
- Working knowledge of database systems (Cassandra, CosmosDB, Redis, PostgreSQL).
- Experience with messaging systems (RabbitMQ, MQTT, Kafka).
- Experience with cloud-based distributed application scaling and data processing in the cloud.
- Familiarity with Agile/Scrum methodology.
- Advanced troubleshooting skills.
- Familiarity with JavaScript frameworks.
- Good communication skills.
- High-level project management skills.

Responsibilities:
- Meeting with technology managers to determine application and website requirements.
- Upgrading existing .NET websites and applications.
- Analyzing system requirements and delegating development tasks.
- Developing technical specifications.
- Writing scalable code for .NET software applications.
- Reviewing and debugging .NET applications.
- Providing support for junior developers.
- Deploying functional websites, programs, and applications.
- Drafting software and application operating procedures.
- Training junior staff.
Requirements:
- Bachelor’s degree in computer science or information technology.
- Previous experience as a .NET developer.
- High-level managerial skills.
- Knowledge of .NET languages including C#, Visual Basic.NET, C++/CLI, J#, and JScript.NET.
- Proficient with front-end development languages including JavaScript, HTML5, and CSS.
- Ability to project manage.
- Excellent problem-solving skills.
- Good verbal and written communication skills.


