Responsibilities for Data Scientist/ NLP Engineer
• Work with customers to identify opportunities for leveraging their data to drive business solutions.
• Develop custom data models and algorithms to apply to data sets.
• Perform basic data cleaning and annotation on incoming raw data.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
• Develop the company's A/B testing framework and test model quality.
• Deploy ML models in production.
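The A/B testing responsibility above usually comes down to deciding whether a variant's conversion rate differs significantly from the control's. As an illustrative sketch only (not part of this posting's stack, with made-up numbers), a two-proportion z-test needs nothing beyond the standard library:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_*: number of conversions; n_*: number of users in each arm.
    Returns the z statistic and the per-arm conversion rates.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, p_a, p_b

# Hypothetical experiment: 200/10,000 control vs 260/10,000 variant.
z, p_a, p_b = ab_test_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"control={p_a:.3f} variant={p_b:.3f} z={z:.2f}")  # |z| > 1.96 -> significant at 5%
```

In practice a framework would add sequential testing and multiple-comparison controls on top, but the core statistic is this simple.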
Qualifications for Junior Data Scientist/ NLP Engineer
• BS, MS in Computer Science, Engineering, or related discipline.
• 3+ years of experience in Data Science/Machine Learning.
• Experience with programming language Python.
• Familiarity with at least one database query language, such as SQL.
• Knowledge of Text Classification & Clustering, Question Answering & Query Understanding,
Search Indexing & Fuzzy Matching.
• Excellent written and verbal communication skills for coordinating across teams.
• Willing to learn and master new technologies and techniques.
• Knowledge and experience in statistical and data mining techniques:
GLM/Regression, Random Forest, Boosting, Trees, text mining, NLP, etc.
• Experience with chatbots would be a bonus but is not required.
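The "Search Indexing & Fuzzy Matching" qualification above can be prototyped with the standard library's difflib before reaching for a dedicated search engine. A minimal sketch, using a hypothetical job-title catalog for illustration:

```python
from difflib import SequenceMatcher, get_close_matches

# Hypothetical catalog to match user queries against.
catalog = ["data scientist", "nlp engineer", "machine learning engineer",
           "data architect", "big data engineer"]

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] based on longest matching subsequences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Closest catalog entries to a misspelled query, above a similarity cutoff.
matches = get_close_matches("nlp enginer", catalog, n=2, cutoff=0.6)
print(matches)  # ['nlp engineer']
```

Production systems typically layer this kind of scoring on top of an inverted index so only a small candidate set is compared, rather than the whole catalog.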
About Simplifai Cognitive Solutions Pvt Ltd
The growth of artificial intelligence (https://www.simplifai.ai/en/artificial-intelligence/) accelerated these thoughts. Machine learning made it possible for the projects to get smaller, the solutions smarter, and the automation more efficient. Bård and Erik wanted to bring AI to the people, and they wanted to do it simply.
Simplifai was founded in 2017 and has grown considerably since then. Today we work globally and have offices in Norway, India, and Ukraine. We have built a global, diverse organization that is well prepared for further growth.
Similar jobs
Our client combines Adtech and Martech platform strategy with data science and data engineering expertise, helping clients make advertising work better for people.
- Act as primary day-to-day contact on analytics to agency-client leads
- Develop bespoke analytics proposals for presentation to agencies & clients, for delivery within the teams
- Ensure delivery of projects and services across the analytics team meets our stakeholder requirements (time, quality, cost)
- Work hands-on with platforms to perform data pre-processing, involving both data transformation and data cleaning
- Ensure data quality and integrity
- Interpret and analyse data problems
- Build analytic systems and predictive models
- Increase the performance and accuracy of machine learning algorithms through fine-tuning and further optimization
- Visualize data and create reports
- Experiment with new models and techniques
- Align data projects with organizational goals
Requirements
- Minimum 6-7 years' experience working in Data Science
- Prior experience as a Data Scientist within a digital media environment is desirable
- Solid understanding of machine learning
- A degree in a quantitative field (e.g. economics, computer science, mathematics, statistics, engineering, physics, etc.)
- Experience with SQL / BigQuery / GMP tech stack / clean rooms such as ADH
- A knack for statistical analysis and predictive modelling
- Good knowledge of R, Python
- Experience with SQL, MYSQL, PostgreSQL databases
- Knowledge of data management and visualization techniques
- Hands-on experience with BI/visual analytics tools such as Power BI, Tableau, or Data Studio
- Evidence of technical comfort and good understanding of internet functionality desirable
- Analytical pedigree - evidence of having approached problems from a mathematical perspective and working through to a solution in a logical way
- Proactive and results-oriented
- A positive, can-do attitude with a thirst to continually learn new things
- An ability to work independently and collaboratively with a wide range of teams
- Excellent communication skills, both written and oral
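The "statistical analysis and predictive modelling" expected above starts with fundamentals like fitting a line to data. As a purely illustrative sketch (hypothetical numbers, standard library only), closed-form ordinary least squares looks like this:

```python
def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form, no dependencies)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

# Hypothetical ad spend (x) vs conversions (y).
a, b = fit_ols([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.1, 9.8])
print(f"y = {a:.2f} + {b:.2f}x")
```

Real analytics work would use R or Python's statistical libraries, which add standard errors and diagnostics on top of this same computation.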
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP, Mumbai, IN
Job Description
GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly for the majority of farmers, who are digitally naive. We are looking for people who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction, amongst others.
GormalOne is looking for a machine learning engineer to join the team. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but who can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 Years of experience as a Data Scientist
- Skilled in Data Analysis, EDA, Model Building, and Analysis.
- Basic coding skills in Python
- Decent knowledge of Statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch, or TensorFlow
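Frameworks like Keras and PyTorch, listed above, automate one core loop: a forward pass, a loss gradient, and a weight update. As an illustrative sketch of that loop only (a single sigmoid neuron on a toy made-up dataset, not how production models are built):

```python
import math

def train_neuron(data, epochs=2000, lr=0.5):
    """Single sigmoid neuron trained by stochastic gradient descent
    on binary labels. DL frameworks run exactly this loop at scale."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))  # forward pass (sigmoid)
            grad = p - y                          # dLoss/dz for log loss
            w -= lr * grad * x                    # backward pass / update
            b -= lr * grad
    return w, b

# Toy task: learn a threshold near x = 0 (negative -> 0, positive -> 1).
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b = train_neuron(data)
pred = lambda x: 1 / (1 + math.exp(-(w * x + b))) > 0.5
print([pred(x) for x, _ in data])  # [False, False, True, True]
```

ANN/DNN/CNN/RNN/LSTM architectures differ in how the forward pass is structured, but the gradient-descent training loop is the same idea throughout.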
Basic Qualifications
- B.Tech/BE in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master's/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience with tools and packages like TensorFlow, MLflow, and Airflow
- Experience with object detection techniques like YOLO
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
Kindly note: Salary shall be commensurate with qualifications and experience
- 3+ years of experience applying AI/ML/NLP/deep learning/data-driven statistical analysis and modelling solutions.
- Programming skills in Python, knowledge in Statistics.
- Hands-on experience developing supervised and unsupervised machine learning algorithms (regression, decision trees/random forest, neural networks, feature selection/reduction, clustering, parameter tuning, etc.). Familiarity with reinforcement learning is highly desirable.
- Experience in the financial domain and familiarity with financial models are highly desirable.
- Experience in image processing and computer vision.
- Experience working with building data pipelines.
- Good understanding of Data preparation, Model planning, Model training, Model validation, Model deployment and performance tuning.
- Should have hands-on experience with some of these methods: Regression, Decision Trees, CART, Random Forest, Boosting, Evolutionary Programming, Neural Networks, Support Vector Machines, Ensemble Methods, Association Rules, Principal Component Analysis, Clustering, Artificial Intelligence
- Should have experience working with large data sets using a Postgres database.
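Of the methods listed above, clustering is one of the easiest to show end to end. A purely illustrative 1-D k-means sketch with made-up points (real work would use scikit-learn or similar):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain 1-D k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Empty clusters keep their previous centroid.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups, around 1 and around 10.
print(kmeans([0.9, 1.0, 1.1, 9.9, 10.0, 10.1], k=2))  # approx. [1.0, 10.0]
```

The same assign-then-update structure generalizes to higher dimensions by replacing the absolute difference with Euclidean distance.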
Job Location: Chennai
Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Conde Nast’s industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on cloud.
• Proven leadership skills, with a demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT, and data integration technologies.
• Experience in any one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the
future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop Architecture and Hive SQL
• Knowledge of any one of the workflow orchestration tools
• Understanding of Agile framework and delivery
Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a
plus
● Understanding of Digital web events, ad streams, context models
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms, generating a staggering amount of user data in the process. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language, and to facilitate an environment where insights shared in real time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
We are hiring a software engineer. Requirements: minimum 1 year of experience; engineering graduate from a Mechanical/EEE/EC/CS stream.
- The primary role will be helping our customers with development requirements in image processing.
- It will also involve developing technical specifications and product descriptions in the image-processing field.
You will get to work on new and disruptive technologies.
Key Skills:
* Familiarity with basic Linux commands
* Hands-on experience in image-processing application development based on Deep Neural Networks, OpenCV, etc.
* Experience working with Python, R, TensorFlow, and C/C++
Job Location : Hyderabad
Resumes to be sent to Ogive mail id
Senior Big Data Engineer
Note: Notice Period : 45 days
Banyan Data Services (BDS) is a US-based data-focused Company that specializes in comprehensive data solutions and services, headquartered in San Jose, California, USA.
We are looking for a Senior Hadoop Bigdata Engineer who has expertise in solving complex data problems across a big data platform. You will be a part of our development team based out of Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure.
It's a once-in-a-lifetime opportunity to join our rocket ship startup run by a world-class executive team. We are looking for candidates that aspire to be a part of the cutting-edge solutions and services we offer that address next-gen data evolution challenges.
Key Qualifications
· 5+ years of experience working with Java and Spring technologies
· At least 3 years of programming experience working with Spark on big data; including experience with data profiling and building transformations
· Knowledge of microservices architecture is a plus
· Experience with any NoSQL databases such as HBase, MongoDB, or Cassandra
· Experience with Kafka or any streaming tools
· Knowledge of Scala would be preferable
· Experience with agile application development
· Exposure to any cloud technologies, including containers and Kubernetes
· Demonstrated experience performing DevOps for platforms
· Strong skills in data structures and algorithms, with a focus on writing efficient, low-complexity code
· Exposure to Graph databases
· Passion for learning new technologies and the ability to do so quickly
· A Bachelor's degree in a computer-related field or equivalent professional experience is required
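The Spark "transformations" experience asked for above follows the classic map/reduce pattern. As a miniature, purely illustrative word count in plain Python (Spark runs the same shape of computation distributed across a cluster):

```python
from collections import Counter
from functools import reduce

def map_phase(line):
    """Map: emit per-line word counts, analogous to Spark's flatMap/map."""
    return Counter(line.lower().split())

def reduce_phase(a, b):
    """Reduce: merge partial counts, analogous to Spark's reduceByKey."""
    return a + b

# Hypothetical input lines.
lines = ["Spark transforms data", "Kafka streams data", "data everywhere"]
counts = reduce(reduce_phase, map(map_phase, lines), Counter())
print(counts["data"])  # 3
```

Because the reduce step is associative, partial counts can be merged in any order, which is exactly what lets frameworks like Spark parallelize it across partitions.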
Key Responsibilities
· Scope and deliver solutions with the ability to design solutions independently based on high-level architecture
· Design and develop the big data-focused micro-Services
· Involve in big data infrastructure, distributed systems, data modeling, and query processing
· Build software with cutting-edge technologies on cloud
· Willingness to learn new technologies and take on research-oriented projects
· Proven interpersonal skills while contributing to team effort by accomplishing related results as needed
- Owns the end-to-end implementation of the assigned data processing components/product features, i.e. design, development, deployment, and testing of the data processing components and associated flows, conforming to best coding practices
- Creation and optimization of data engineering pipelines for analytics projects
- Support data and cloud transformation initiatives
- Contribute to our cloud strategy based on prior experience
- Independently work with all stakeholders across the organization to deliver enhanced functionalities
- Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting
- Gather and understand data requirements, work in the team to achieve high-quality data ingestion, and build systems that can process and transform the data
- Be able to comprehend the application of database indexes and transactions
- Involve in the design and development of a Big Data predictive analytics SaaS-based customer data platform using object-oriented analysis, design and programming skills, and design patterns
- Implement ETL workflows for data matching, data cleansing, data integration, and management
- Maintain existing data pipelines, and develop new data pipelines using big data technologies
- Responsible for leading the effort of continuously improving reliability, scalability, and stability of microservices and platform
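The ETL bullet about "error recovery, and exception handling and reporting" above has a standard shape: retry each record a few times, then quarantine the bad ones instead of crashing the run. A minimal sketch with hypothetical extract/transform steps (real pipelines would use an orchestrator such as Airflow):

```python
def extract(record):
    """Hypothetical extract step; raises ValueError on malformed input."""
    return int(record)

def transform(value):
    """Hypothetical transform step."""
    return value * 2

def run_pipeline(records, retries=3):
    """Tiny ETL loop with per-record retries and an error side channel.

    Retries matter for transient failures (e.g. network blips); records
    that fail every attempt are quarantined in `errors` for reporting.
    """
    loaded, errors = [], []
    for rec in records:
        for attempt in range(retries):
            try:
                loaded.append(transform(extract(rec)))
                break
            except ValueError:
                if attempt == retries - 1:
                    errors.append(rec)  # quarantine after final attempt
    return loaded, errors

loaded, errors = run_pipeline(["1", "2", "oops", "4"])
print(loaded, errors)  # [2, 4, 8] ['oops']
```

The key design choice is that failures are data, not exceptions: the error channel feeds the reporting the bullet asks for, and the happy path keeps flowing.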
Role and Responsibilities
- Build a low latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
- Build robust RESTful APIs that serve data and insights to DataWeave and other products
- Design user interaction workflows on our products and integrate them with data APIs
- Help stabilize and scale our existing systems. Help design the next generation systems.
- Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
- Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.
Skills and Requirements
- 8-15 years of experience building and scaling APIs and web applications.
- Experience building and managing large scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc.
- Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.