11+ SOFA Statistics Jobs in Chennai | SOFA Statistics Job openings in Chennai
at LatentView Analytics
- A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
- Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
- Mentor and coach other team members
- Evaluate the performance of NLP models and ideate on how they can be improved
- Support internal and external NLP-facing APIs
- Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
- Any graduate degree with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
- Strong analytical and problem-solving capabilities.
- Proven ability to multi-task and deliver results within tight time frames
- Must have strong verbal and written communication skills
- Strong listening skills and eagerness to learn
- Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
- NLP
- Deep Learning
- Machine Learning
- Python
- BERT
Preferred Requirements
- Experience in Computer Vision is preferred
Position Overview: We are seeking a talented Data Engineer with expertise in Power BI to join our team. The ideal candidate will be responsible for designing and implementing data pipelines, as well as developing insightful visualizations and reports using Power BI. Additionally, the candidate should have strong skills in Python, data analytics, PySpark, and Databricks. This role requires a blend of technical expertise, analytical thinking, and effective communication skills.
Key Responsibilities:
- Design, develop, and maintain data pipelines and architectures using PySpark and Databricks.
- Implement ETL processes to extract, transform, and load data from various sources into data warehouses or data lakes.
- Collaborate with data analysts and business stakeholders to understand data requirements and translate them into actionable insights.
- Develop interactive dashboards, reports, and visualizations using Power BI to communicate key metrics and trends.
- Optimize and tune data pipelines for performance, scalability, and reliability.
- Monitor and troubleshoot data infrastructure to ensure data quality, integrity, and availability.
- Implement security measures and best practices to protect sensitive data.
- Stay updated with emerging technologies and best practices in data engineering and data visualization.
- Document processes, workflows, and configurations to maintain a comprehensive knowledge base.
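The extract-transform-load pattern named in the responsibilities above can be sketched in a few lines. The posting's actual stack is PySpark and Databricks; the sketch below uses only Python's standard library (csv, sqlite3) purely to illustrate the pattern, and the table, column names, and sample data are hypothetical.

```python
import csv
import io
import sqlite3

# Hypothetical raw source data; in practice this would come from files or APIs.
RAW_CSV = """order_id,region,amount
1,South,1200.50
2,North,
3,South,310.00
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts and cast types."""
    return [
        (int(r["order_id"]), r["region"], float(r["amount"]))
        for r in rows
        if r["amount"]
    ]

def load(rows, conn):
    """Load: write cleaned rows into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INT, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

In a PySpark/Databricks pipeline, `extract` would be a DataFrame read, `transform` a chain of DataFrame operations, and `load` a write to a data lake or warehouse, but the three-stage shape is the same.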
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or related field. (Master’s degree preferred)
- Proven experience as a Data Engineer with expertise in Power BI, Python, PySpark, and Databricks.
- Strong proficiency in Power BI, including data modeling, DAX calculations, and creating interactive reports and dashboards.
- Solid understanding of data analytics concepts and techniques.
- Experience working with Big Data technologies such as Hadoop, Spark, or Kafka.
- Proficiency in programming languages such as Python and SQL.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Excellent analytical and problem-solving skills with attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
- Ability to work independently and manage multiple tasks simultaneously in a fast-paced environment.
Preferred Qualifications:
- Advanced degree in Computer Science, Engineering, or related field.
- Certifications in Power BI or related technologies.
- Experience with data visualization tools other than Power BI (e.g., Tableau, QlikView).
- Knowledge of machine learning concepts and frameworks.
Ganit has flipped the data science value chain: we do not start with a technique; for us, consumption comes first. With this philosophy, we have successfully scaled from a small start-up to a 200-person company with clients in the US, Singapore, Africa, the UAE, and India.
We are looking for experienced data enthusiasts who can make the data talk to them.
You will:
- Understand business problems and translate business requirements into technical requirements.
- Conduct complex data analysis to ensure data quality & reliability i.e., make the data talk by extracting, preparing, and transforming it.
- Identify, develop and implement statistical techniques and algorithms to address business challenges and add value to the organization.
- Gather requirements and communicate findings in the form of a meaningful story with the stakeholders
- Build & implement data models using predictive modelling techniques. Interact with clients and provide support for queries and delivery adoption.
- Lead and mentor data analysts.
We are looking for someone who has:
- Apart from your love for data and the ability to code even while sleeping, you would need the following:
- Minimum of 2 years of experience in designing and delivering data science solutions.
- Successful retail/BFSI/FMCG/Manufacturing/QSR projects in your kitty to show off.
- Deep understanding of various statistical techniques, mathematical models, and algorithms to start the conversation with the data in hand.
- Ability to choose the right model for the data and translate it into code using R, Python, VBA, SQL, etc.
- Bachelor's/Master's degree in Engineering/Technology, an MBA from a Tier-1 B-school, or an M.Sc. in Statistics or Mathematics.
Skillset Required:
- Regression
- Classification
- Predictive Modelling
- Prescriptive Modelling
- Python
- R
- Descriptive Modelling
- Time Series
- Clustering
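One of the listed techniques, regression, can be illustrated with a minimal pure-Python example: simple linear regression fitted in closed form by ordinary least squares. The data points are made up for illustration.

```python
# Toy data, roughly following y = 2x (made-up values).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares in closed form:
# slope = cov(x, y) / var(x); intercept follows from the means.
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    / sum((x - mean_x) ** 2 for x in xs)
)
intercept = mean_y - slope * mean_x
```

In practice one would reach for R's `lm` or a Python library, but the closed form above is what those fits compute for a single predictor.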
What is in it for you:
- Be a part of building the biggest brand in Data science.
- An opportunity to be a part of a young and energetic team with a strong pedigree.
- Work on awesome projects across industries and learn from the best in the industry, while growing at a hyper rate.
Please Note:
At Ganit, we are looking for people who love problem solving. You are encouraged to apply even if your experience does not precisely match the job description above. Your passion and skills will stand out and set you apart—especially if your career has taken some extraordinary twists and turns over the years. We welcome diverse perspectives, people who think rigorously and are not afraid to challenge assumptions in a problem. Join us and punch above your weight!
Ganit is an equal opportunity employer and is committed to providing a work environment that is free from harassment and discrimination.
All recruitment, selection procedures and decisions will reflect Ganit’s commitment to providing equal opportunity. All potential candidates will be assessed according to their skills, knowledge, qualifications, and capabilities. No regard will be given to factors such as age, gender, marital status, race, religion, physical impairment, or political opinions.
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
Desirable
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering, or Biological Sciences.
- Hands-on experience with Python, PySpark, and SQL.
- Hands-on experience building end-to-end data pipelines.
- Hands-on experience with Azure Data Factory, Azure Databricks, and Data Lake is an added advantage.
- Experience with Big Data tools: Hadoop, Hive, Sqoop, Spark, Spark SQL.
- Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
- Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willing to learn and work on Data Science, ML, AI.
Role : Senior Customer Scientist
Experience : 6-8 Years
Location : Chennai (Hybrid)
Who are we?
A young, fast-growing AI and big data company, with an ambitious vision to simplify the world's choices. Our clients are top-tier enterprises in the banking, e-commerce and travel spaces. They use our core AI-based choice engine, maya.ai, to deliver personal digital experiences centered around taste. The maya.ai platform now touches over 125M customers globally. You'll find Crayon Boxes in Chennai and Singapore. But you'll find Crayons in every corner of the world. Especially where our client projects are: UAE, India, SE Asia and pretty soon the US.
Life in the Crayon Box is a little chaotic, largely dynamic and keeps us on our toes! Crayons are a diverse and passionate bunch. Challenges excite us. Our mission drives us. And good food, caffeine (for the most part) and youthful energy fuel us. Over the last year alone, Crayon has seen a growth rate of 3x, and we believe this is just the start.
We’re looking for young and young-at-heart professionals with a relentless drive to help Crayon double its growth. Leaders, doers, innovators, dreamers, implementers and eccentric visionaries, we have a place for you all.
Can you say “Yes, I have!” to the below?
- Experience with exploratory analysis, statistical analysis, and model development
- Knowledge of advanced analytics techniques, including Predictive Modelling (Logistic regression), segmentation, forecasting, data mining, and optimizations
- Knowledge of software packages such as SAS, R, and RapidMiner for analytical modelling and data management.
- Strong experience in SQL/ Python/R working efficiently at scale with large data sets
- Experience in using Business Intelligence tools such as PowerBI, Tableau, Metabase for business applications
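The list above names logistic regression among the expected techniques. As a hedged sketch of what that involves, the following fits a logistic model to a toy 1-D dataset with batch gradient descent; the data, learning rate, and iteration count are illustrative, not a production recipe.

```python
import math

# Toy 1-D dataset: points below 2.5 are class 0, above are class 1 (made up).
xs = [0.5, 1.0, 1.5, 2.0, 3.0, 3.5, 4.0, 4.5]
ys = [0, 0, 0, 0, 1, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    # Gradients of the average log-loss with respect to weight and bias.
    grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# Predict class 1 where the modelled probability crosses 0.5.
predictions = [1 if sigmoid(w * x + b) >= 0.5 else 0 for x in xs]
```

On real projects the same model would come from a library such as scikit-learn or SAS, with regularisation and proper validation; the loop above only shows the underlying optimisation.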
Can you say “Yes, I will!” to the below?
- Drive clarity and solve ambiguous, challenging business problems using data-driven approaches. Propose and own data analysis (including modelling, coding, analytics) to drive business insight and facilitate decisions.
- Develop creative solutions and build prototypes to business problems using algorithms based on machine learning, statistics, and optimisation, and work with engineering to deploy those algorithms and create impact in production.
- Perform time-series analyses, hypothesis testing, and causal analyses to statistically assess the relative impact and extract trends
- Coordinate individual teams to fulfil client requirements and manage deliverables
- Communicate and present complex concepts to business audiences
- Travel to client locations when necessary
Crayon is an equal opportunity employer. Employment is based on a person's merit, qualifications, and professional competence. Crayon does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, marital status, pregnancy, or related conditions.
More about Crayon: https://www.crayondata.com/
More about maya.ai: https://maya.ai/
Work Location : Chennai
Experience Level : 5+ years
Package : Up to 18 LPA
Notice Period : Immediate joiners
It's a full-time opportunity with our client.
Mandatory Skills : Machine Learning, Python, Tableau & SQL
Job Requirements:
- 2+ years of industry experience in predictive modeling, data science, and analysis.
- Experience with ML models including but not limited to regression, random forests, and XGBoost.
- Experience in an ML engineer or data scientist role building and deploying ML models, or hands-on experience developing deep learning models.
- Experience writing code in Python and SQL, with documentation for reproducibility.
- Strong proficiency in Tableau.
- Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools, and writing SQL.
- Experience writing and speaking about technical concepts to business, technical, and lay audiences, and giving data-driven presentations.
- AWS SageMaker experience is a plus, but not required.
We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.
Responsibilities:
- You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
- You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
- You will be working with cutting-edge technologies and tools for stream processing using Java, NodeJS and Python, using frameworks like Spring, RxJS etc.
- You will be leveraging big data technologies like Kafka, Elasticsearch and Spark, processing more than 10 Billion events per day to build a maintainable system at scale.
- You will be building Domain Driven APIs as part of a micro-service architecture.
- You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
- You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.
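The stream-processing work described above, consuming a high-volume event stream and maintaining running aggregates, can be sketched compactly. In the posting's stack this role is played by Kafka consumers and Spark; here a Python generator stands in for the stream, and the event fields and device names are hypothetical.

```python
from collections import Counter

def event_stream():
    """Stand-in for an unbounded event stream (e.g. a Kafka topic)."""
    events = [
        {"device": "cnc-01", "metric": "spindle_load", "value": 71},
        {"device": "cnc-02", "metric": "spindle_load", "value": 55},
        {"device": "cnc-01", "metric": "spindle_load", "value": 83},
    ]
    for e in events:
        yield e

def aggregate(stream):
    """Single pass over the stream: count events and track the peak value per device."""
    counts, peaks = Counter(), {}
    for e in stream:
        counts[e["device"]] += 1
        peaks[e["device"]] = max(peaks.get(e["device"], float("-inf")), e["value"])
    return counts, peaks

counts, peaks = aggregate(event_stream())
```

The key property carried over from real stream processing is the single incremental pass: state is updated per event rather than by re-scanning history, which is what makes billions of events per day tractable.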
Requirements:
- Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
- 2 to 5 years of product development experience.
- Experience building applications using Java, NodeJS, or Python.
- Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
- Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies.
- Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
- Experience using NoSQL databases like MongoDB or Elasticsearch.
- Prior experience with container orchestrators like Kubernetes is a plus.
We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.
Please visit https://govimana.com/ to learn more about what we do.
Why Explore a Career at VIMANA
- We recognize that our dedicated team members make us successful and we offer competitive salaries.
- We are a workplace that values work-life balance, provides flexible working hours, and full time remote work options.
- You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
- Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!
VIMANA Interview Process
We usually aim to complete all interviews within a week and provide prompt feedback to the candidate. For now, all interviews are conducted online due to the COVID-19 situation.
1. Telephonic screening (30 min)
A 30-minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company.
Clarify any queries regarding the job/company.
Give an overview of further interview rounds.
2. Technical Rounds
This is a deep technical round to evaluate the candidate's technical capability pertaining to the job role.
3. HR Round
The candidate's team and cultural fit will be evaluated during this round.
We would proceed with releasing the offer if the candidate clears all the above rounds.
Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.