What are the Key Responsibilities:
- Design NLP applications
- Select appropriate annotated datasets for Supervised Learning methods
- Use effective text representations to transform natural language into useful features
- Find and implement the right algorithms and tools for NLP tasks
- Develop NLP systems according to requirements
- Train the developed model and run evaluation experiments
- Perform statistical analysis of results and refine models
- Extend ML libraries and frameworks to apply in NLP tasks
- Remain updated in the rapidly changing field of machine learning
What are we looking for:
- Proven experience as an NLP Engineer or similar role
- Understanding of NLP techniques for text representation, semantic extraction techniques, data structures, and modeling
- Ability to effectively design software architecture
- Deep understanding of text representation techniques (such as n-grams, bag-of-words, sentiment analysis, etc.), statistics, and classification algorithms
- Knowledge of Python, Java, and R
- Ability to write robust and testable code
- Experience with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
- Strong communication skills
- An analytical mind with problem-solving abilities
- Degree in Computer Science, Mathematics, Computational Linguistics, or similar field
About Service Pack
Omni-Channel CX Automation Suite powered by AI. Our vision is to transform Customer Experience using Artificial Intelligence.
Job Description
Position: Sr. Data Engineer – Databricks & AWS
Experience: 4 - 5 Years
Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are transforming ourselves and rapidly expanding our business.
Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.
One of the top partners of Cloudera (a leading analytics player) and Qlik (a leader in BI technologies), Exponentia.ai was awarded the ‘Innovation Partner Award’ by Qlik in 2017.
Get to know more about us on our website: http://www.exponentia.ai/ and Life @Exponentia.
• A Data Engineer understands client requirements and develops and delivers data engineering solutions as per the scope.
• The role requires strong skills in developing solutions using the various services required for data architecture on Databricks Delta Lake, streaming, AWS, ETL development, and data modeling.
• Design of data solutions on Databricks, including Delta Lake, data warehouses, data marts, and other data solutions to support the analytics needs of the organization.
• Apply best practices during design in data modeling (logical, physical) and ETL pipelines (streaming and batch) using cloud-based services.
• Design, develop and manage the pipelining (collection, storage, access), data engineering (data quality, ETL, Data Modelling) and understanding (documentation, exploration) of the data.
• Interact with stakeholders to understand the data landscape, conduct discovery exercises, develop proofs of concept, and demonstrate them to stakeholders.
• More than 2 years of experience developing data lakes and data marts on the Databricks platform.
• Proven skill set in AWS data lake services such as AWS Glue, S3, Lambda, SNS, and IAM, along with skills in Spark, Python, and SQL.
• Experience in Pentaho
• Good understanding of developing a data warehouse, data marts etc.
• Good understanding of system architectures and design patterns, with the ability to design and develop applications using these principles.
• Good collaboration and communication skills
• Excellent problem-solving skills to be able to structure the right analytical solutions.
• Strong sense of teamwork, ownership, and accountability
• Analytical and conceptual thinking
• Ability to work in a fast-paced environment with tight schedules.
• Good presentation skills with the ability to convey complex ideas to peers and management.
BE / ME / MS / MCA.
Who are we?
We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long term commitments with an aim of bringing a product mindset into services.
What we are looking for
We’re looking to hire software craftspeople. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High-quality, motivated, and passionate people who make great teams. We strongly believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only on programming languages but also on infrastructure technologies in the cloud.
What you’ll be doing
First, you will be writing tests. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.
You will work in a product team. Building products and rapidly rolling out new features and fixes.
You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases, development, deployment, and fixes. You will own the entire stack from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you’ll be making a pledge that you’ll never stop learning!
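The test-first workflow described above can be sketched in a few lines. This is a hypothetical illustration (the `slugify` function and its expected behaviour are invented for the example, not part of any client project):

```python
# Test-first style: the tests are written before the implementation
# they specify, then the implementation is written to make them pass.

def test_slugify_lowercases_and_joins_with_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_surrounding_whitespace():
    assert slugify("  Data Engineer  ") == "data-engineer"

# Implementation written to satisfy the tests above.
def slugify(text: str) -> str:
    return "-".join(text.strip().lower().split())

# Run the tests directly (pytest would normally discover and run them).
test_slugify_lowercases_and_joins_with_hyphens()
test_slugify_strips_surrounding_whitespace()
```

Each test doubles as self-explanatory documentation of one behaviour, which is what makes frequent small releases safe.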
Skills you need in order to succeed in this role
Most Important: Integrity of character, diligence and the commitment to do your best
Must Have: SQL, Databricks, (Scala / Pyspark), Azure Data Factory, Test Driven Development
Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing
Self-Learner: You must be extremely hands-on and obsessive about delivering clean code
- Sense of Ownership: Do whatever it takes to meet development timelines
- Experience in creating end-to-end data pipelines
- Experience in Azure Data Factory (ADF): creating multiple pipelines and activities for full and incremental data loads into Azure Data Lake Store and Azure SQL DW
- Working experience in Databricks
- Strong in BI/DW/data lake architecture, design, and ETL
- Strong in Requirement Analysis, Data Analysis, Data Modeling capabilities
- Experience in object-oriented programming, data structures, algorithms and software engineering
- Experience working in Agile and Extreme Programming methodologies in a continuous deployment environment.
- Interest in mastering technologies such as relational DBMSs, TDD, CI tools like Azure DevOps, complexity analysis, and performance
- Working knowledge of server configuration / deployment
- Experience using source control and bug tracking systems, writing user stories and technical documentation
- Expertise in creating tables, procedures, functions, triggers, indexes, views, joins, and optimization of complex queries
- Experience with database versioning, backups, and restores
- Expertise in data security
- Ability to perform database performance tuning
Qualification – Any engineering graduate with STRONG programming and logical reasoning skills.
Minimum years of experience: 2 – 5 years
Previous experience as a Data Engineer or in a similar role.
Technical expertise with data models, data mining, and segmentation techniques.
Knowledge of programming languages (e.g., Java and Python).
Hands-on experience with SQL Programming
Hands-on experience with Python Programming
Knowledge of tools such as dbt, ADF, Snowflake, and Databricks would be an added advantage for our current project.
Strong numerical and analytical skills.
Experience in dealing directly with customers and internal sales organizations.
Strong written and verbal communication, including technical writing skills.
Good to have: Hands-on experience in Cloud services.
Knowledge of ML
Data warehouse builds (DB, SQL, ETL, reporting tools like Power BI, etc.)
Do share your profile at gayathrirajagopalan@jmangroup.com
We are looking for a candidate with a proven ability to process large datasets and create new data models. This is an exciting opportunity to join the explosively growing Blockchain fraternity and work constructively with our in-house industry experts.
We expect you to have hands-on working experience with:
- Machine Learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
- Python data science toolkit with scikit-learn and Keras/PyTorch
- Prowess with both SQL and NoSQL Databases
- Good applied statistics skills, such as distributions, statistical testing, and regression
- Excellent scripting and programming skills
- Fundamental principles of CCAR, CECL, Basel II, or IFRS
- Docker and other PaaS (Platform-as-a-Service) products
- MUST Specialize in Natural Language Processing
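The classic algorithms named above (k-NN, Naive Bayes, SVM) can be sketched in a few lines, assuming scikit-learn and its bundled iris dataset; the hyperparameters shown are illustrative defaults, not a recommendation:

```python
# Fit and score three classic classifiers on the same train/test split.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
}

# fit() returns the estimator, so fitting and scoring can be chained.
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
print(scores)
```

Comparing several simple models on a held-out split like this is usually the first step before reaching for anything deeper.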
Key Roles & Responsibilities
- Develop predictive models, statistical analyses, optimization procedures, monitoring processes, data quality analyses, and score implementations supporting impairment
- Produce robust documentation to ensure replicability of results and fulfill Antier's
- Superlative communication skills to represent us before stakeholders
- Leadership skills to build, expand and drive a new process
Academic & other Qualifications
- Postgraduate university degree in a quantitative discipline required (e.g., Statistics, Operations Research, Economics, Computer Science). Graduate studies.
- An ability to produce reports and interrogate systems to produce analysis and
- Proficiency with analytical software (Python, SAS, or R) and SQL tools.
- Prior experience in banking products such as Mutual Funds will add value
Big Data Engineer/Data Engineer
What we are solving
Welcome to today’s business data world where:
• Unification of all customer data into one platform is a challenge
• Extraction is expensive
• Business users do not have the time/skill to write queries
• High dependency on tech team for written queries
These facts may look scary but there are solutions with real-time self-serve analytics:
• Fully automated data integration from any kind of data source into a universal schema
• Analytics database that streamlines data indexing, query and analysis into a single platform.
• Start generating value from Day 1 through deep dives, root cause analysis and micro segmentation
At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• By clearly defining the scope of Projects
• Using dependable, scalable, future-proof technology solutions like Big Data solutions and Cloud Platforms
• Engaging with Data Scientists and Data Engineers to provide End to End Solutions leading to industrialisation of Data Science Model Development and Deployment
What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily
Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only with highest quality team members who want to become the very best in their fields.
• With each member's belief and faith in what we are solving, we collectively see the Big Picture
• Having no hierarchy means anyone can reach the decision maker without hesitation, so that our actions have fruitful and aligned outcomes.
• Each one is a CEO of their domain. So, the criterion while making a choice is that our employees and clients can succeed together!
About the role
We are building an exceptional team of Data Engineers who are passionate developers and want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings to our clients across the globe.
• The role would involve big data pre-processing & reporting workflows including collecting, parsing, managing, analysing, and visualizing large sets of data to turn information into business insights
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying learned models for ongoing scoring and prediction.
Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE
• 3+ years of experience designing technological solutions to complex data problems, developing and testing modular, reusable, efficient, and scalable code to implement those solutions
Must have (hands-on) experience
• Python and SQL expertise
• Distributed computing frameworks (Hadoop Ecosystem & Spark components)
• Must be proficient in a cloud computing platform (AWS/Azure/GCP); experience with GCP (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine), AWS, or Azure preferred
• Linux environment, SQL, and shell scripting
Desirable
• Statistical or machine learning DSL like R
• Distributed and low latency (streaming) application architecture
• Row store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc
• Familiarity with API design
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams
Preferred Immediate Joiners
Location - Remote till COVID (Hyderabad StackNexus office post-COVID)
Experience - 5 - 7 years
Skills Required - Should have hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks.
Notice period - Immediate to 15 days
We at Wow Labz are always striving to look for exciting problems to solve. Whether we’re creating new products or helping a small startup extend its reach, we build from our heart. We’re entrepreneurial and we love new ideas. Fun culture with a team that cares about your development and growth.
What are we looking for?
We are looking for an expert in machine learning to help us extract maximum value from our data. You will be leading all the processes from data collection, cleaning, and preprocessing, to training models and deploying them to production. In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.
Role & Responsibilities:
- Identify valuable data sources and automate collection processes
- Study and transform data science prototypes
- Research and Implement appropriate ML algorithms and tools
- Develop machine learning applications according to requirements
- Extend existing ML libraries and frameworks
- Cross-validate models to ensure their generalizability
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
- Guide and mentor the respective teams
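The cross-validation step listed above can be sketched as follows, assuming scikit-learn; the dataset, model, and five-fold choice are illustrative assumptions:

```python
# Estimate generalizability by scoring the model on five held-out folds.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Scaling inside the pipeline keeps the test fold unseen during fitting.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 3), round(scores.std(), 3))
```

A low spread across folds suggests the model will generalize; a high spread is an early warning of overfitting or unstable features.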
Desired Skills and Experience:
- Proven experience as a Machine Learning Engineer or similar role
- Demonstrable history of devising and overseeing data-centered projects
- Understanding of data structures, data modeling and software architecture
- Deep knowledge of math, probability, statistics and algorithms
- Experience with cloud platforms like AWS/Azure/GCP
- Knowledge of server configurations and maintenance.
- Knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
- Ability to select hardware to run an ML model with the required latency
- Excellent communication skills
- Ability to work in a team
- Outstanding analytical and problem-solving skills
- Inclination towards Mathematics and statistics to understand the algorithms at a deeper level
- Strong OOP concepts (Python preferable)
- Hands on experience with Flask or Django
- Ability to learn the latest deployed models and understand their core architecture to gain breadth of expertise
Persona of the kind of people who would be a culture fit:
- You are curious and aware of the latest tech trends
- You are self-driven
- You get a kick out of leading a solution towards its completion.
- You have the capacity to foster a healthy, stimulating work environment that frequently harnesses teamwork
- You are fun to hang out with!
As an experienced Data Scientist you’ll join a team of data scientists, analysts, and software engineers working to push the boundaries of data science in health care. We like to experiment, iterate, and innovate with technology, from developing new algorithms specific to health care’s challenges, to bringing the latest machine learning practices and applications developed in other industries into the health care world. We know that algorithms are only valuable when powered by the right data, so we focus on fully understanding the problems we need to solve, and truly understanding the data behind them, before launching into solutions – ensuring that the solutions we do land on are impactful.
• Research, conceptualize, and implement analytical approaches and predictive modeling to evaluate scenarios, predict utilization and clinical outcomes, and recommend actions
• Manage and execute the entire model development process, including scope definition, hypothesis formation, data cleaning and preparation, feature selection, model implementation in production, validation, and iteration, using multiple data sources
• Provide guidance on necessary data and software infrastructure capabilities to deliver a scalable solution across partners, and support the implementation of the team’s algorithms and models
• Contribute to development and publication in major journals and conferences, showcasing leadership in healthcare data science
• Work closely and collaborate with Data Scientists, Machine Learning Engineers, IT teams, and business stakeholders spread out across various locations in the US and India to achieve business goals
• Provide guidance to other Data Scientists and Machine Learning Engineers
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world.
- Verifying data quality, and/or ensuring it via data cleaning.
- Able to adapt and work fast in producing output that improves stakeholders’ decision-making using ML.
- To design and develop Machine Learning systems and schemes.
- To perform statistical analysis and fine-tune models using test results.
- To train and retrain ML systems and models as and when necessary.
- To deploy ML models in production and manage the cost of cloud infrastructure.
- To develop Machine Learning apps according to client and data scientist requirements.
- To analyze the problem-solving capabilities and use-cases of ML algorithms and rank them by how successful they are in meeting the objective.
- Has worked with real-time problems, solving them with ML and deep learning models deployed in real time, and should have some awesome projects under their belt to showcase.
- Proficiency in Python and experience working with the Jupyter framework, Google Colab, and cloud-hosted notebooks such as AWS SageMaker, Databricks, etc.
- Proficiency in working with scikit-learn, TensorFlow, OpenCV, PySpark, Pandas, NumPy, and related libraries.
- Expert in visualising and manipulating complex datasets.
- Proficiency in working with visualisation libraries such as Seaborn, Plotly, Matplotlib, etc.
- Proficiency in Linear Algebra, statistics and probability required for Machine Learning.
- Proficiency in ML algorithms, for example gradient boosting, stacked machine learning, classification algorithms, and deep learning algorithms. Needs experience in hyperparameter tuning of various models and comparing the results of algorithm performance.
- Big data Technologies such as Hadoop stack and Spark.
- Basic use of clouds (VMs, e.g. EC2).
- Brownie points for Kubernetes and Task Queues.
- Strong written and verbal communications.
- Experience working in an Agile environment.
- SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts will be a plus.
- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations.
- Work with management to prioritise business KPIs and information needs; locate and define new process improvement opportunities.
- Technical expertise with data models, database design and development, data mining and segmentation techniques
- Proven success in a collaborative, team-oriented environment
- Working experience with geospatial data will be a plus.
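Several of the skills listed above (gradient boosting, hyperparameter tuning, comparing algorithm performance) can be sketched together. This is a minimal illustration assuming scikit-learn; the grid values are arbitrary examples, not a tuning recipe:

```python
# Hyperparameter tuning for a gradient-boosting classifier via grid search:
# every combination in the grid is scored with 3-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {
    "n_estimators": [25, 50],
    "learning_rate": [0.1, 0.3],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0), param_grid, cv=3
)
search.fit(X, y)

# The best combination and its cross-validated score.
print(search.best_params_, round(search.best_score_, 3))
```

`search.cv_results_` holds the per-combination scores, which is the usual basis for ranking algorithms and settings against each other.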