We at opexAI strive to craft business strategies and provide effective solutions to your complex business problems using AI, machine learning, and cognitive computing approaches. With 40+ years of experience in analytics and a workforce of highly skilled and certified consultants, we provide a realistic approach to help you chart an optimal path to success.
- Create and manage cloud resources in AWS
- Ingest data from sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
- Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform and to verify the results of calculations
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
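The automated data-quality checks mentioned above can be sketched with pandas; the column names and rules here are hypothetical, not this employer's actual schema:

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    # Completeness: required columns must exist and contain no nulls
    for col in ("customer_id", "loan_amount"):  # hypothetical schema
        if col not in df.columns:
            failures.append(f"missing column: {col}")
        elif df[col].isna().any():
            failures.append(f"nulls in column: {col}")
    # Validity: loan amounts must be positive
    if "loan_amount" in df.columns and (df["loan_amount"] <= 0).any():
        failures.append("non-positive loan_amount values")
    # Uniqueness: the primary key must not repeat
    if "customer_id" in df.columns and df["customer_id"].duplicated().any():
        failures.append("duplicate customer_id values")
    return failures

batch = pd.DataFrame({"customer_id": [1, 2, 2], "loan_amount": [100.0, -5.0, 250.0]})
print(run_quality_checks(batch))
```

A real pipeline would typically run such checks as a gate before loading data into the platform, rejecting or quarantining batches that return any failures.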
- 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of one of the following languages: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake concepts
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages: Python & PySpark
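One common file-based ingestion best practice is to validate each landed file against a supplier-provided checksum before loading it. A minimal stdlib-only sketch (the file and manifest value are illustrative, not a specific system's API):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so arbitrarily large files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_landing(data_file: Path, expected_sha256: str) -> bool:
    """Only promote a file into the data lake if its checksum matches the manifest."""
    return sha256_of(data_file) == expected_sha256

# Demo with a temporary file standing in for a landed flat file
tmp = Path(tempfile.mkstemp()[1])
tmp.write_bytes(b"id,amount\n1,100\n")
expected = hashlib.sha256(b"id,amount\n1,100\n").hexdigest()
print(validate_landing(tmp, expected))
```

The same check works whether the file lands on local disk or is staged from S3; the point is that a corrupted or partial transfer never reaches downstream jobs.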
- Modeling complex problems, discovering insights, and identifying opportunities through the use of statistical, algorithmic, mining, and visualization techniques
- Experience working with the business to understand requirements, create the problem statement, and build scalable and dependable analytical solutions
- Must have hands-on and strong experience in Python
- Broad knowledge of fundamentals and state-of-the-art in NLP and machine learning
- Strong analytical & algorithm development skills
- Deep knowledge of techniques such as linear regression, gradient descent, logistic regression, forecasting, cluster analysis, decision trees, linear optimization, text mining, etc.
- Ability to collaborate across teams and strong interpersonal skills
- Sound theoretical knowledge of ML algorithms and their applications
- Hands-on experience in statistical modeling tools such as R, Python, and SQL
- Hands-on experience in Machine learning/data science
- Strong knowledge of statistics
- Experience in advanced analytics / Statistical techniques – Regression, Decision trees, Ensemble machine learning algorithms, etc
- Experience in Natural Language Processing & Deep Learning techniques
- Pandas, NLTK, scikit-learn, spaCy, TensorFlow
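As a toy illustration of the NLP stack listed above, a scikit-learn pipeline combining TF-IDF features with logistic regression; the four-document sentiment corpus is made up for the sketch:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical two-class sentiment corpus
texts = [
    "great product, works well",
    "terrible service, very slow",
    "excellent support and fast",
    "awful experience, broken item",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF turns text into sparse features; logistic regression classifies them
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["fast and excellent"]))
```

In practice the same pipeline shape scales to real corpora, with NLTK or spaCy handling tokenization and lemmatization upstream of the vectorizer.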
- Architect and implement modules for ingesting, storing and manipulating large data sets for a variety of cybersecurity use-cases.
- Write code to provide backend support for data-driven UI widgets, web dashboards, workflows, search and API connectors.
- Design and implement high performance APIs between our frontend and backend components, and between different backend components.
- Build production quality solutions that balance complexity and performance
- Participate in the engineering life-cycle at Balbix, including designing high quality UI components, writing production code, conducting code reviews and working alongside our backend infrastructure and reliability teams
- Stay current on the ever-evolving technology landscape of web based UIs and recommend new systems for incorporation in our technology stack.
- Product-focused and passionate about building truly usable systems
- Collaborative and comfortable working across teams including data engineering, front end, product management, and DevOps
- Responsible and like to take ownership of challenging problems
- A good communicator who facilitates teamwork via good documentation practices
- Comfortable with ambiguity and able to iterate quickly in response to an evolving understanding of customer needs
- Curious about the world and your profession, and a constant learner
- BS in Computer Science or related field
- At least 3 years of experience in the backend web stack (Node.js, MongoDB, Redis, Elasticsearch, Postgres, Java, Python, Docker, Kubernetes, etc.)
- SQL and NoSQL database experience
- Experience building APIs (development experience using GraphQL is a plus)
- Familiarity with issues of web performance, availability, scalability, reliability, and maintainability
Data Scientist - Product Development
Employment Type: Full Time, Permanent
Experience: 3-5 Years as a Full Time Data Scientist
We are looking for an exceptional Data Scientist who is passionate about data and motivated to build large-scale machine learning solutions that make our data products shine. This person will contribute to the analysis of data for insight discovery and to the development of machine learning pipelines that support modeling terabytes (TB) of daily data for various use cases.
Location: Pune (currently remote due to the pandemic; relocation required later)
About the Organization: A funded product development company, headquartered in Singapore with offices in Australia, the United States, Germany, the United Kingdom, and India. You will gain work experience in a global environment.
Qualifications:
- 3+ years relevant working experience
- Master's / Bachelor's in computer science or engineering
- Working knowledge of Python, Spark / Pyspark, SQL
- Experience working with large-scale data
- Experience in data manipulation, analytics, visualization, model building, model deployment
- Proficiency of various ML algorithms for supervised and unsupervised learning
- Experience working in Agile/Lean model
- Exposure to building large-scale ML models using one or more modern tools and libraries such as AWS SageMaker, Spark MLlib, TensorFlow, PyTorch, Keras, GCP ML Stack
- Exposure to MLOps tools such as MLflow, Airflow
- Exposure to modern Big Data tech such as Cassandra/Scylla, Snowflake, Kafka, Ceph, Hadoop
- Exposure to IAAS platforms such as AWS, GCP, Azure
- Experience with Java and Golang is a plus
- Experience with BI toolkit such as Superset, Tableau, Quicksight, etc is a plus
****** Looking for someone who can join immediately / within a month, has experience with product development companies, and has dealt with streaming data. Experience working in a product development team is desirable. AWS experience is a must. Strong experience in Python and its related libraries is required.
Role : Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)
Primary Location : India-Pune, Hyderabad
Experience : 7 - 12 Years
Management Level: 7
Joining Time: Immediate Joiners are preferred
- Attend requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Experience in solution design and solution architecture for the data engineering model, to build and implement Big Data projects on-premises and in the cloud.
- Align architecture with business requirements and stabilize the developed solution
- Ability to build prototypes to demonstrate the technical feasibility of your vision
- Professional experience facilitating and leading solution design, architecture and delivery planning activities for data intensive and high throughput platforms and applications
- Ability to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them
- Able to help programmers and project managers in the design, planning, and governance of projects of any kind.
- Develop, construct, test and maintain architectures and run Sprints for development and rollout of functionalities
- Data analysis and code development experience, ideally in Big Data technologies such as Spark, Hive, Hadoop, Java, Python, and PySpark
- Execute projects of various types, e.g. design, development, implementation, and migration of functional analytics models/business logic across architecture approaches
- Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions of the product
- Deploy sophisticated analytics code using any cloud platform.
Perks and Benefits we Provide!
- Working with Highly Technical and Passionate, mission-driven people
- Subsidized Meals & Snacks
- Flexible Schedule
- Approachable leadership
- Access to various learning tools and programs
- Pet Friendly
- Certification Reimbursement Policy
- Check out more about us on our website below!
• Solid technical / data-mining skills and the ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL, and other programming/scripting languages to translate data into business decisions/results
• Be data-driven and outcome-focused
• Must have good business judgment with a demonstrated ability to think creatively
• Must be an intuitive, organized analytical thinker, with the ability to perform detailed analysis
• Takes personal ownership; self-starter; ability to drive projects with minimal guidance and focus on high impact work
• Learns continuously; seeks out knowledge, ideas and feedback.
• Looks for opportunities to build own skills, knowledge and expertise.
• Experience with big data and cloud computing, viz. Spark, Hadoop (MapReduce, PIG, etc.)
• Experience in risk and credit score domains preferred
• Comfortable with ambiguity and frequent context-switching in a fast-paced environment
- Required to work individually or as part of a team on data science projects and work closely with lines of business to understand business problems and translate them into identifiable machine learning problems which can be delivered as technical solutions.
- Build quick prototypes to check feasibility and value to the business.
- Design, train, and deploy neural networks for computer vision and machine learning-related problems.
- Perform various complex activities related to statistical/machine learning.
- Coordinate with business teams to provide analytical support for developing, evaluating, implementing, monitoring, and executing models.
- Collaborate with technology teams to deploy the models to production.
- 2+ years of experience in solving complex business problems using machine learning.
- Understanding and modeling experience in supervised, unsupervised, and deep learning models; hands-on knowledge of data wrangling, data cleaning/ preparation, dimensionality reduction is required.
- Experience in Computer Vision/Image Processing/Pattern Recognition, Machine Learning, Deep Learning, or Artificial Intelligence.
- Understanding of deep learning architectures like InceptionNet, VGGNet, FaceNet, YOLO, SSD, R-CNN, Mask R-CNN, ResNet.
- Experience with one or more deep learning frameworks e.g., TensorFlow, PyTorch.
- Knowledge of vector algebra, statistical and probabilistic modeling is desirable.
- Proficiency in programming skills involving Python, C/C++, and Python Data Science Stack (NumPy, SciPy, Pandas, Scikit-learn, Jupyter, IPython).
- Experience working with Amazon SageMaker or Azure ML Studio for deployments is a plus.
- Experience in data visualization software such as Tableau, ELK, etc is a plus.
- Strong analytical, critical thinking, and problem-solving skills.
- B.E/ B.Tech./ M. E/ M. Tech in Computer Science, Applied Mathematics, Statistics, Data Science, or related Engineering field.
- Minimum 60% in Graduation or Post-Graduation
- Great interpersonal and communication skills
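Dimensionality reduction, one of the hands-on skills the list above asks for, can be illustrated with a minimal scikit-learn/NumPy sketch on synthetic data (the data and dimensions are invented for the example):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 100 samples living mostly along one direction inside a 5-D space
base = rng.normal(size=(100, 1))
data = base @ rng.normal(size=(1, 5)) + 0.01 * rng.normal(size=(100, 5))

# Project the 5-D points onto their 2 highest-variance directions
pca = PCA(n_components=2).fit(data)
reduced = pca.transform(data)
print(reduced.shape, round(pca.explained_variance_ratio_[0], 3))
```

Because the synthetic data is nearly one-dimensional, the first principal component captures almost all the variance, which is the kind of structure PCA is designed to expose before feeding features into a downstream model.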
We are Still Hiring!!!
This email is regarding open positions for Data Engineer Professionals with our organisation CRMNext.
If the company profile and JD match your aspirations, and your profile matches the required skills and qualifications criteria, please share your updated resume along with responses to the questions.
We will then reach out to you to schedule interviews.
Driven by a Passion for Excellence
Acidaes Solutions Pvt. Ltd. is a fast-growing specialist Customer Relationship Management (CRM) product IT company providing ultra-scalable CRM solutions. It offers CRMNEXT, our flagship and award-winning CRM platform, to leading enterprises on both cloud and on-premise models. We consistently focus on using state-of-the-art technology solutions to provide leading product capabilities to our customers.
CRMNEXT is a global cloud CRM solution provider credited with the world's largest installation ever. From Fortune 500 to start-ups, businesses across nine verticals have built profitable customer relationships via CRMNEXT. A pioneer of Digital CRM for some of the largest enterprises across Asia-Pacific, CRMNEXT's customers include global brands like Pfizer, HDFC Bank, ICICI Bank, Axis Bank, Tata AIA, Reliance, National Bank of Oman, Pavers England etc. It was recently lauded in the Gartner Magic Quadrant 2015 for Lead management, Sales Force Automation and Customer Engagement. For more information, visit us at www.crmnext.com
B.E./B.Tech /M.E./ M.Tech/ MCA with (Bsc.IT/Bsc. Comp/BCA is mandatory)
60% in Xth, XIIth /diploma, B.E./B.Tech/M.E/M.Tech/ MCA with (Bsc.IT/Bsc. Comp/BCA is mandatory)
All education should be regular (please note: degrees through distance learning/correspondence will not be considered)
Exp level- 2 to 5 yrs
Technical expertise required:
1) Analytics experience in the BFSI domain is a must
2) Hands-on technical experience in Python, big data, and AI
3) Understanding of data models and analytical concepts
4) Client engagement :
Should have run past client engagements for Big Data/AI projects, from requirement gathering through planning development sprints to delivery
Should have experience in deploying big data and AI projects
First-hand experience with data governance, data quality, customer data models, and industry data models
Aware of SDLC.
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.
Data Scientist @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.
How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!
What do we offer?
● Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
● Ability to see the impact of your work and the value you're adding to our customers almost immediately.
● Opportunity to work on different problems and explore a wide variety of tools to figure out what really works.
● A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
● Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
● Last but not the least, competitive salary packages and fast-paced growth opportunities.
Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production grade data science applications at scale. Such a candidate has keen interest in liaising with the business and product teams to understand a business problem, and translate that into a data science problem.
You are also expected to develop capabilities that open up new business productization opportunities.
We are looking for someone with a Master's degree and 1+ years of experience working on problems in NLP or Computer Vision.
If you have 4+ years of relevant experience with a Master's degree (PhD preferred), you will be considered for a senior role.
Key problem areas
● Preprocessing and feature extraction for noisy and unstructured data -- both text and images.
● Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
● Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
● Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
● Ensemble approaches for all the above problems using multiple text and image based techniques.
Relevant set of skills
● Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
● Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
● Excellent coding skills in multiple programming languages, with experience building production grade systems. Prior experience with Python is a bonus.
● Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
● Experience building robust clustering and classification models on unstructured data (text, images, etc.). Experience working with Retail domain data is a bonus.
● Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
● Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, TensorFlow.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Be a self-starter, someone who thrives in fast-paced environments with minimal 'management'.
● It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.
Role and responsibilities
● Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
● Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
● Build robust clustering and classification models in an iterative manner that can be used in production.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Take end to end ownership of the projects you are working on. Work with minimal supervision.
● Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
● Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
● Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
● Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.
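The robust clustering work described above can be illustrated with a minimal scikit-learn sketch; the two synthetic blobs stand in for, say, product embeddings, and are invented for the example:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two well-separated synthetic groups standing in for product embeddings
group_a = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
group_b = rng.normal(loc=5.0, scale=0.1, size=(50, 2))
points = np.vstack([group_a, group_b])

# k-means with multiple restarts to avoid poor local optima
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(len(set(km.labels_[:50])), len(set(km.labels_[50:])))
```

On real, messy web data the hard parts are choosing the representation and the number of clusters, but the iterate-fit-inspect loop starts from exactly this kind of baseline.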