Responsibilities:
Complete accountability for delivering 1-2 projects from conception to implementation
Managing the team of Associates and Senior Associates
Interviewing clients to elicit key figures and gather requirements
Managing project timing, client expectations and meeting deadlines
Creating smart & impactful PowerPoint presentations
Playing proactive role in business development (preparing sales collaterals, pitch documents etc.)
and organization building (training, recruitment etc.)
Presenting final results to the client and discussing further opportunities within/outside the project, along with maintaining documentation and reports
Planning deliverables and milestones for the projects you are responsible for
Providing business analysis and business area assessments
Facilitating team meetings on a regular basis
Tracking and reporting team hours
Behavioural Competencies:
The candidate must have the ability to think strategically and analytically in order to assess each assignment effectively
Excellent written and oral communication skills
Ability to work to tight deadlines & under pressure
Excellent interpersonal & organizational skills
Good listening, comprehension, and management skills
Willingness to travel/work abroad
About Affine Analytics
Senior Software Engineer/Technical Lead - Data Fabric
at IDfy
Who is IDfy?
IDfy is the Fintech ScaleUp of the Year 2021. We build technology products that identify people accurately. This helps businesses prevent fraud and engage with genuine customers with the least amount of friction. If you have opened an account with HDFC Bank, ordered from Amazon or Zomato, transacted through Paytm or BharatPe, or played on Dream11 or MPL, you might have already experienced IDfy, without even knowing it. Well…that’s just how we roll.

Global credit rating giant TransUnion is an investor in IDfy. So are international venture capitalists like MegaDelta Capital, BEENEXT, and Dream Incubator. Blume Ventures is an early investor and continues to place its faith in us.

We have kept our 500 clients safe from fraud while helping the honest get the opportunities they deserve. Our 350-person-strong family works and plays out of our offices in suburban Mumbai. IDfy has run verifications on 100 million people. In the next 2 years, we want to touch a billion users. If you wish to be part of this journey, filled with lots of action and learning, we welcome you to be part of the team!
What are we looking for?
As a senior software engineer in the Data Fabric POD, you will be responsible for producing and implementing functional software solutions. You will work with upper management to define software requirements and take the lead on operational and technical projects. You will be working with a data management and science platform that provides Data as a Service (DaaS) and Insight as a Service (IaaS) to internal employees and external stakeholders.
You are an eager-to-learn, technology-agnostic engineer who loves working with data and drawing insights from it. You have excellent organization and problem-solving skills and are looking to build the tools of the future. You have exceptional communication and leadership skills and the ability to make quick decisions.
YOE: 3 - 10 yrs
Position: Sr. Software Engineer/Module Lead/Technical Lead
Responsibilities:
- Breaking down work and orchestrating the development of components for each sprint.
- Identifying risks and forming contingency plans to mitigate them.
- Liaising with team members, management, and clients to ensure projects are completed to standard.
- Inventing new approaches to detecting existing fraud. You will also stay ahead of the game by predicting future fraud techniques and building solutions to prevent them.
- Developing Zero Defect Software that is secured, instrumented, and resilient.
- Creating design artifacts before implementation.
- Developing Test Cases before or in parallel with implementation.
- Ensuring software developed passes static code analysis, performance, and load test.
- Developing various kinds of components (such as UI components, APIs, business components, image processing, etc.) that define the IDfy Platforms, which drive cutting-edge Fraud Detection and Analytics.
- Developing software using Agile Methodology and tools that support the same.
Requirements:
- Experience with Apache Beam, ClickHouse, Grafana, InfluxDB, Elixir, BigQuery, and Logstash.
- An understanding of Product Development Methodologies.
- Strong understanding of relational databases, especially SQL, and hands-on experience with OLAP.
- Experience in the creation of data ingestion and ETL pipelines (Apache Beam or Apache Airflow experience is good to have).
- Strong design skills in defining API Data Contracts / OOAD / Microservices / Data Models.
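To make the "API Data Contracts / Data Models" requirement concrete, here is a minimal sketch of a typed data contract in Python. The type name and fields (`case_id`, `risk_score`, etc.) are hypothetical illustrations, not from the posting.

```python
from dataclasses import dataclass, asdict

# A minimal API data contract as an immutable, validated model.
# All field names here are hypothetical examples.
@dataclass(frozen=True)
class VerificationResult:
    case_id: str
    applicant_name: str
    risk_score: float  # 0.0 (safe) to 1.0 (high risk)

    def __post_init__(self):
        # Enforce the contract at construction time.
        if not 0.0 <= self.risk_score <= 1.0:
            raise ValueError("risk_score must be in [0, 1]")

# Serialize to a plain dict, ready to return as a JSON API response.
result = VerificationResult("c-101", "A. Sharma", 0.12)
payload = asdict(result)
```

The point of a contract like this is that producers and consumers of the API share one validated schema, so malformed records fail loudly at the boundary rather than deep inside a pipeline.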
Good to have:
- Experience with TimeSeries DBs (we use InfluxDB) and Alerting / Anomaly Detection Frameworks.
- Visualization Layers: Metabase, PowerBI, Tableau.
- Experience in developing software in the Cloud such as GCP / AWS.
- A passion to explore new technologies and express yourself through technical blogs.
Who Are We?
Vahak (https://www.vahak.in) is India’s largest & most trusted online transport marketplace & directory for road transport businesses and individual commercial vehicle (trucks, trailers, containers, Hyva, LCVs) owners, for online truck and load booking, transport business branding, and transport business network expansion. Lorry owners can find intercity and intracity loads from all over India and connect with other businesses to find trusted transporters and the best deals in the Indian logistics services market. With the Vahak app, users can book loads and lorries from a live transport marketplace with over 7 lakh transporters and lorry owners in 10,000+ locations for daily transport requirements.
Vahak has raised a capital of $5+ Million in a Pre-Series A round from RTP Global along with participation from Luxor Capital and Leo Capital. The other marquee angel investors include Kunal Shah, Founder and CEO, CRED; Jitendra Gupta, Founder and CEO, Jupiter; Vidit Aatrey and Sanjeev Barnwal, Co-founders, Meesho; Mohd Farid, Co-founder, Sharechat; Amrish Rau, CEO, Pine Labs; Harsimarbir Singh, Co-founder, Pristyn Care; Rohit and Kunal Bahl, Co-founders, Snapdeal; and Ravish Naresh, Co-founder and CEO, Khatabook.
Manager Data Science:
We at Vahak are looking for an enthusiastic and passionate Manager of Data Science to join our young & diverse team. You will play a key role in the data science group, working with different teams and identifying use cases that could be solved by applying data science techniques.
Our goal as a group is to drive powerful big data analytics products with scalable results. We love people who are humble and collaborative, with a hunger for excellence.
Responsibilities:
- Mine and analyze end-to-end business data and generate actionable insights. Work will involve analyzing customer transaction data and marketing campaign performance, identifying process bottlenecks, business performance analysis, etc.
- Identify data-driven opportunities to drive optimization and improvement of product development, marketing techniques, and business strategies.
- Collaborate with Product and Growth teams to test and learn at an unprecedented pace and help the team achieve substantial upside in key metrics
- Actively participate in the OKR process and help the team democratize the key KPIs and metrics that drive various objectives
- Be comfortable with digital marketing campaign concepts and the use of marketing campaign platforms such as Google Adwords and Facebook Ads
- Design algorithms that require different advanced analytics techniques and heuristics to work together
- Create dashboards and visualizations from scratch and present data in a logical manner to all stakeholders
- Collaborate with internal teams to create actionable items based on analysis; work with datasets to conduct complex quantitative analysis and help drive innovation for our customers
Requirements:
- Bachelor’s or Master’s degree in Engineering, Science, Maths, Economics, or another quantitative field. An MBA is a plus but not required
- 5+ years of proven experience working in the data science field, preferably in e-commerce/web-based or consumer technology companies
- Thorough understanding of implementation and analysis of product and marketing metrics at scale
- Strong problem-solving skills with an emphasis on product development
- Fluency in statistical computing languages like SQL, Python, and R, as well as a deep understanding of statistical analysis, experiment design, and common pitfalls of data analysis
- Should have worked with a relational database like Oracle or MySQL; experience with big data systems like BigQuery or Redshift is a definite plus
- Experience using business intelligence tools, e.g. Tableau or Power BI, would be an added advantage (not mandatory)
- Conducting advanced statistical analysis to provide actionable insights, identify trends, and measure performance
- Performing data exploration, cleaning, preparation, and feature engineering, in addition to executing tasks such as building a POC and validation / A/B testing
- Collaborating with data engineers & architects to implement and deploy scalable solutions
- Communicating results to diverse audiences with effective writing and visualizations
- Identifying and executing high-impact projects, triaging external requests, and ensuring timely completion so the results remain useful
- Providing thought leadership by researching best practices, conducting experiments, and collaborating with industry leaders
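The A/B testing and experiment-design items above can be illustrated with a worked example: a two-proportion z-test comparing conversion rates between two campaign variants. All counts below are made-up illustrative numbers.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test statistic for an A/B conversion experiment."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical campaign data: variant B converts at 5.5% vs A's 5.0%.
z = two_proportion_z(conv_a=500, n_a=10_000, conv_b=550, n_b=10_000)
# |z| > 1.96 would be significant at the 5% level (two-sided test).
```

A common pitfall the posting alludes to: with these sample sizes the observed 0.5-point lift is not significant, which is exactly why the test is run before declaring a winner.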
What you need to have:
- 2-4 years' experience with machine learning algorithms, predictive analytics, and demand forecasting in real-world projects
- Strong statistical background in descriptive and inferential statistics, regression, and forecasting techniques
- Strong programming background in Python (including packages like TensorFlow), R, D3.js, Tableau, Spark, SQL, and MongoDB
- Preferred: exposure to optimization & meta-heuristic algorithms and related applications
- Background in a highly quantitative field like Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, Industrial Engineering, or similar
- Should have 2-4 years of experience in data science algorithm design and implementation, and data analysis across different applied problems
- DS mandatory skills: Python, R, SQL, deep learning, predictive analysis, applied statistics
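As a sketch of the meta-heuristic item above: a minimal simulated annealing loop minimizing a toy one-dimensional function. The objective, cooling schedule, and step size are all illustrative choices, not anything prescribed by the posting.

```python
import math
import random

def anneal(f, x0, steps=5000, temp0=1.0, seed=0):
    """Minimal simulated annealing: minimize f(x) over the reals."""
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(steps):
        temp = temp0 * (1 - k / steps) + 1e-9     # linear cooling schedule
        cand = x + rng.gauss(0, 0.5)              # random neighbour move
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / temp).
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if f(x) < f(best):
            best = x
    return best

# Toy objective with its minimum at x = 3.
best = anneal(lambda x: (x - 3) ** 2, x0=-10.0)
```

The defining trait of the meta-heuristic is the acceptance of occasional uphill moves early on (high temperature), which lets the search escape local minima before the schedule turns it greedy.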
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any cloud big data components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent cloud big data components (specific to the Data Engineering role)
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), dashboard development, mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NoSQL, RDBMS, or any cloud big data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka, or equivalent cloud big data components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
- Strong experience with SQL and relational databases
- Good programming experience with Scala & Spark
- Good experience with ETL batch data pipeline development and migration/upgrading
- Python – good to have
- AWS – good to have
- Knowledgeable in the areas of big data/Hadoop/S3/Hive. Experience with ETL frameworks (e.g. Airflow, Flume, Oozie, etc.) is nice to have
- Ability to work independently, take ownership, and strong troubleshooting/debugging skills
- Good communication and collaboration skills
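As a toy illustration of the extract-transform-load batch pattern the bullets above describe, here is a plain-Python sketch; the field names and sample data are hypothetical, and a real pipeline would run on Spark or under an orchestrator such as Airflow.

```python
import csv
import io

def extract(raw_csv: str):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: normalize names, cast amounts, drop malformed rows."""
    out = []
    for r in rows:
        try:
            out.append({"name": r["name"].strip().title(),
                        "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # skip records that fail validation
    return out

def load(rows, sink: list):
    """Load: append clean rows to a sink (a list stands in for a table)."""
    sink.extend(rows)
    return len(rows)

# Hypothetical batch: one row has whitespace, one has a bad amount.
raw = "name,amount\n alice ,10.5\nbob,notanumber\nCAROL,3\n"
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

Keeping the three stages as separate functions is what makes migration/upgrading tractable: each stage can be swapped (e.g. the sink for a real database writer) without touching the others.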
Why us?
We at Wow Labz are always striving to find exciting problems to solve. Whether we’re creating new products or helping a small startup extend its reach, we build from the heart. We’re entrepreneurial and we love new ideas. We have a fun culture, with a team that cares about your development and growth.
What are we looking for?
We are looking for an expert in machine learning to help us extract maximum value from our data. You will be leading all the processes from data collection, cleaning, and preprocessing, to training models and deploying them to production. In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.
Role & Responsibilities:
- Identify valuable data sources and automate collection processes
- Study and transform data science prototypes
- Research and Implement appropriate ML algorithms and tools
- Develop machine learning applications according to requirements
- Extend existing ML libraries and frameworks
- Cross-validate models to ensure their generalizability
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
- Guide and mentor the respective teams
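The "cross-validate models" responsibility above can be sketched without any ML library: a manual k-fold split that averages a score over held-out folds. The "model" here is a deliberate stand-in (predict the training mean), just to show the mechanics.

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, start = list(range(n)), 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

def cross_val_score(fit_score, data, k=5):
    """Average the held-out score across k folds."""
    scores = [fit_score([data[i] for i in tr], [data[i] for i in te])
              for tr, te in k_fold_indices(len(data), k)]
    return sum(scores) / len(scores)

# Stand-in "model": predict the training mean; score is negative
# mean absolute error on the held-out fold (higher is better).
def fit_score(train, test):
    mean = sum(train) / len(train)
    return -sum(abs(x - mean) for x in test) / len(test)

score = cross_val_score(fit_score, data=[1.0, 2.0, 3.0, 4.0, 5.0], k=5)
```

Because every point is held out exactly once, the averaged score estimates how the model generalizes to unseen data, which is the point of the cross-validation bullet.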
Desired Skills and Experience:
- Proven experience as a Machine Learning Engineer or similar role
- Demonstrable history of devising and overseeing data-centered projects
- Understanding of data structures, data modeling and software architecture
- Deep knowledge of math, probability, statistics and algorithms
- Experience with cloud platforms like AWS/Azure/GCP
- Knowledge of server configurations and maintenance.
- Knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
- Ability to select hardware to run an ML model with the required latency
- Excellent communication skills
- Ability to work in a team
- Outstanding analytical and problem-solving skills
Must have:
- Inclination towards Mathematics and statistics to understand the algorithms at a deeper level
- Strong OOP concepts (Python preferred)
- Hands-on experience with Flask or Django
- Ability to learn the latest deployed models and understand their core architecture to gain breadth of expertise
Persona of the kind of people who would be a culture fit:
- You are curious and aware of the latest tech trends
- You are self-driven
- You get a kick out of leading a solution towards its completion.
- You have the capacity to foster a healthy, stimulating work environment that frequently harnesses teamwork
- You are fun to hang out with!
About antuit.ai
Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.
Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.
The Role:
Antuit is looking for a Data / Sr. Data Scientist who has knowledge and experience in developing machine learning algorithms, particularly in the supply chain and forecasting domain, with data science toolkits like Python.
In this role, you will design the approach, develop and test machine learning algorithms, and implement the solution. The candidate should have excellent communication skills and be results-driven, with a customer-centric approach to problem solving. Experience working in the demand forecasting or supply chain domain is a plus. This job also requires the ability to operate in a multi-geographic delivery environment and a good understanding of cross-cultural sensitivities.
Responsibilities:
Responsibilities include, but are not limited to, the following:
- Design, build, test, and implement predictive Machine Learning models.
- Collaborate with client to align business requirements with data science systems and process solutions that ensure client’s overall objectives are met.
- Create meaningful presentations and analysis that tell a “story” focused on insights, to communicate the results/ideas to key decision makers.
- Collaborate cross-functionally with domain experts to identify gaps and structural problems.
- Contribute to standard business processes and practices as part of a community of practice.
- Be the subject matter expert across multiple work streams and clients.
- Mentor and coach team members.
- Set a clear vision for the team members and work cohesively to attain it.
Qualifications and Skills:
Requirements
- Experience / Education:
- Master’s or Ph.D. in Computer Science, Computer Engineering, Electrical Engineering, Statistics, Applied Mathematics, or another related field
- 5+ years’ experience working in applied machine learning, or relevant research experience for recent Ph.D. graduates.
- Highly technical:
- Skilled in machine learning, problem-solving, pattern recognition and predictive modeling with expertise in PySpark and Python.
- Understanding of data structures and data modeling.
- Effective communication and presentation skills
- Able to collaborate closely and effectively with teams.
- Experience in time series forecasting is preferred.
- Experience working in start-up type environment preferred.
- Experience in CPG and/or Retail preferred.
- Effective communication and presentation skills.
- Strong management track record.
- Strong inter-personal skills and leadership qualities.
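Since time-series forecasting is called out in the qualifications above, here is a minimal simple-exponential-smoothing sketch in plain Python. The demand series and the smoothing factor alpha are illustrative; production forecasting would use a dedicated library.

```python
def simple_exp_smoothing(series, alpha=0.3):
    """Simple exponential smoothing.

    Recurrence: level_t = alpha * y_t + (1 - alpha) * level_{t-1}.
    The final level doubles as the one-step-ahead forecast.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

demand = [100, 102, 101, 105, 107, 106]   # hypothetical weekly demand
forecast = simple_exp_smoothing(demand, alpha=0.3)
```

Alpha controls the memory of the forecast: values near 1 chase the latest observation, values near 0 average over the whole history; choosing it is itself a small model-selection problem.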
Information Security Responsibilities
- Understand and adhere to Information Security policies, guidelines, and procedures, and practice them to protect organizational data and information systems.
- Take part in Information Security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).
EEOC
Antuit.ai is an at-will, equal opportunity employer. We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
Strong knowledge of Power BI (DAX + Power Query + Power BI Service + Power BI Desktop visualisations) and Azure data storage.
Should have experience with Power BI mobile dashboards.
Strong knowledge of SQL.
Good knowledge of DWH concepts.
Work as an independent contributor at the client location.
Implement access control and impose the required security.
The candidate must have very good communication skills.
Big Data Developer
Role Summary/Purpose:
We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing, and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.
Requirements:
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
- Overall minimum of 4 to 8 years of software development experience, including 2 years of Data Warehousing domain knowledge
- Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
- Excellent knowledge of SQL & Linux shell scripting
- Bachelor’s/Master’s/Engineering degree from a well-reputed university
- Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively
- Proven experience in coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
- Ability to manage a diverse and challenging stakeholder community
- Diverse knowledge and experience of working on Agile deliveries and Scrum teams.
Responsibilities
- Work as a senior developer/individual contributor as the situation requires
- Be part of Scrum discussions and take requirements
- Adhere to the Scrum timeline and deliver accordingly
- Participate in a team environment for design, development, and implementation
- Take on L3 activities on an as-needed basis
- Prepare Unit/SIT/UAT test cases and log the results
- Coordinate SIT and UAT testing. Take feedback and provide the necessary remediation/recommendations in time.
- Treat quality delivery and automation as top priorities
- Coordinate changes and deployments on time
- Foster healthy harmony within the team
- Own interaction points with members of the core team (e.g. BA, testing, and business teams) and any other relevant stakeholders