Job Description: Data Scientist
At Propellor.ai, we derive insights that allow our clients to make scientific decisions. We believe in demanding more from the fields of Mathematics, Computer Science, and Business Logic. Combining these, we show our clients a 360-degree view of their business. In this role, the Data Scientist will be expected to work on Procurement problems alongside a team based across the globe.
We are a Remote-First Company.
Read more about us here: https://www.propellor.ai/consulting
What will help you be successful in this role
- Articulate
- High Energy
- Passion to learn
- High sense of ownership
- Ability to work in a fast-paced and deadline-driven environment
- Loves technology
- Highly skilled at Data Interpretation
- Problem solver
- Ability to narrate the story to the business stakeholders
- Ability to generate insights and turn them into actions and decisions
Skills to work in a challenging, complex project environment
- Natural curiosity and a passion for understanding consumer behavior
- A high level of motivation, passion, and sense of ownership
- Excellent communication skills needed to manage an incredibly diverse slate of work, clients, and team personalities
- Flexibility to work on multiple projects in a deadline-driven, fast-paced environment
- Ability to work with ambiguity and manage chaos
Key Responsibilities
- Analyze data to unlock insights: Ability to identify relevant insights and actions from data. Use regression, cluster analysis, time series, etc. to explore relationships and trends in response to stakeholder questions and business challenges.
- Bring in AI and ML experience: Apply industry experience to build efficient, optimal machine learning solutions.
- Exploratory Data Analysis (EDA) and insight generation: Analyze internal and external datasets using analytical techniques, tools, and visualization methods. Ensure pre-processing/cleansing of data, and evaluate data points across the enterprise landscape and/or external data points that can be leveraged in machine learning models to generate insights.
- DS and ML model identification and training: Identify, test, and train machine learning models to be leveraged for business use cases. Evaluate models based on interpretability, performance, and accuracy as required. Experiment with and identify features from datasets that will help influence model outputs. Determine which models will need to be deployed and which data points need to be fed into them, and aid in the deployment and maintenance of models.
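The regression work described in the responsibilities above can be sketched in miniature. Below is a toy ordinary-least-squares fit in plain Python; the function name and the spend/sales numbers are invented for illustration, not taken from any real project:

```python
# Toy ordinary least squares: explore the relationship between two
# variables, e.g. ad spend vs. weekly sales (numbers are invented).

def fit_simple_ols(xs, ys):
    """Return (slope, intercept) minimizing squared error for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    return slope, mean_y - slope * mean_x

spend = [1.0, 2.0, 3.0, 4.0, 5.0]
sales = [2.1, 4.0, 6.2, 7.9, 10.1]
slope, intercept = fit_simple_ols(spend, sales)
print(round(slope, 2), round(intercept, 2))  # → 1.99 0.09
```

In practice this exploration would be done with libraries such as statsmodels or scikit-learn rather than by hand; the point is the stakeholder-facing interpretation of slope and fit, not the arithmetic.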
Technical Skills
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them; we are open to promising candidates who are passionate about their work, fast learners, and team players.
- Strong experience with machine learning and AI, including regression, forecasting, time series, cluster analysis, classification, image recognition, NLP, text analytics, and computer vision.
- Strong experience with advanced analytics tools for object-oriented/object-function scripting, using languages such as Python.
- Strong experience with popular database programming languages including SQL.
- Strong experience in Spark/Pyspark
- Experience in working in Databricks
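As a small illustration of the SQL skills listed above, here is a hedged sketch using Python's built-in sqlite3 module; the orders table, its columns, and the supplier names are all hypothetical:

```python
import sqlite3

# In-memory database with a hypothetical procurement table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (supplier TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("Acme", 120.0), ("Acme", 80.0), ("Globex", 50.0)],
)

# Aggregate spend per supplier, largest first.
rows = conn.execute(
    "SELECT supplier, SUM(amount) AS total"
    " FROM orders GROUP BY supplier ORDER BY total DESC"
).fetchall()
print(rows)  # → [('Acme', 200.0), ('Globex', 50.0)]
```

The same GROUP BY/ORDER BY pattern carries over directly to Spark SQL on Databricks, just at a much larger scale.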
What are the company benefits you get when you join us?
- Permanent Work from Home Opportunity
- Opportunity to work with Business Decision Makers and an internationally based team
- A work environment that offers limitless learning
- A culture devoid of bureaucracy and hierarchy
- An open, direct culture built on mutual respect
- A fun, high-caliber team that trusts you and provides the support and mentorship to help you grow
- The opportunity to work on high-impact business problems that are already defining the future of Marketing and improving real lives
To know more about how we work: https://bit.ly/3Oy6WlE
Whom will you work with?
You will work closely with other Senior Data Scientists and Data Engineers.
Immediate to 15-day Joiners will be preferred.
About Propellor.ai
Who we are
At Propellor, we are passionate about solving Data Unification challenges faced by our clients. We build solutions using the latest tech stack. We believe all solutions lie in the congruence of Business, Technology, and Data Science. Combining the 3, our team of young Data professionals solves some real-world problems. Here's what we live by:
Skin in the game
We believe that Individual and Collective success orientations both propel us ahead.
Cross Fertility
Borrowing from and building on one another’s varied perspectives means we are always viewing business problems with a fresh lens.
Sub 25's
A bunch of young Turks who keep our explorer mindset alive and kicking.
Future-proofing
Keeping an eye ahead, we are upskilling constantly, staying relevant at any given point in time.
Tech Agile
Tech changes quickly. Whatever your stack, we adapt speedily and easily.
If you are evaluating us to be your next employer, we urge you to read more about our team and culture here: https://bit.ly/3ExSNA2. We assure you, it's worth a read!
Similar jobs
As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end solutions to machine learning-based security/privacy control
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine learning
• Help the team self-organize and follow machine learning best practices.
Basic Qualifications
• 4+ years of experience contributing to the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead, OR leading an engineering team
• 4+ years of professional software development experience in Big Data and Machine Learning fields
• Knowledge of common ML frameworks such as TensorFlow and PyTorch
• Experience with cloud-provider Machine Learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++, or C#, including object-oriented design
• 3+ years of experience contributing to the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems
• Experience in Python
• BS in Computer Science or equivalent
- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.
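The extract-transform-load responsibilities above can be sketched end to end with a toy example. This uses only the Python standard library; the CSV feed, schema, and table name are hypothetical, and a production pipeline would target Spark, Glue, or Redshift rather than in-memory SQLite:

```python
import csv
import io
import sqlite3

# Extract: a hypothetical CSV feed from a third-party vendor.
raw = io.StringIO("user_id,amount\n1,10.5\n2,20.0\n1,4.5\n")
records = list(csv.DictReader(raw))

# Transform: cast types and aggregate amount per user.
totals = {}
for rec in records:
    uid = int(rec["user_id"])
    totals[uid] = totals.get(uid, 0.0) + float(rec["amount"])

# Load: write the aggregated rows into a warehouse-style table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE user_totals (user_id INTEGER, total REAL)")
db.executemany("INSERT INTO user_totals VALUES (?, ?)", sorted(totals.items()))
loaded = db.execute("SELECT * FROM user_totals ORDER BY user_id").fetchall()
print(loaded)  # → [(1, 15.0), (2, 20.0)]
```

In a real deployment each of the three phases would be a separate, scheduled task (e.g. an Airflow DAG), with the transform running on Spark for large datasets.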
Requirements
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow.
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
Designation: Business Analyst
Company name: Promilo.com (Sawara Solutions Pvt Ltd)
Experience: 2-8 yrs.
Location: Bangalore
Mode: Full time / work from office
About us:
Promilo is India's first innovative "pay to browse" platform.
It is a B2B SaaS start-up that enables companies to accelerate their business appointment funnel. We're a SaaS-based advertising platform that connects both users & advertisers. Users can book an online appointment with an advertiser based on their interests, without compromising their data privacy, and get rewarded for sharing their data and time. We're registered and recognized by Startup India, Startup Karnataka & MSME, and are among the top 100 Google AppScale Academy start-ups.
Job description:
We are looking for an experienced business analyst to join our team. The ideal candidate will have 2-8 years of experience in web & mobile user and client data analysis for start-ups, a strong passion for helping start-ups, and a proven track record of bringing strong business insight to improve the sales, marketing, user, client, UI, and UX of the organisation.
Responsibilities
- Requirement gathering and analysis
- Conduct gap analysis, assess scope & suggest solutions
- Responsible for technical proposal writing and time and cost analysis for web and mobile application development
- Preparing RFPs/RFQs
- Would be involved in presales activities
- Work as liaison between client and technical team
- Create wireframes | prototypes | feature lists | SRS | BRD & flow diagrams as per the client's requirements
- High IT literacy: proven use of the web and associated technologies (Excel, PowerPoint, Google Apps)
- Previous experience with data visualization tools (Tableau, Power BI, etc.) is strongly preferred
- Cleanse and curate sourced data into standardized reporting templates
- Create, document, validate, and ensure delivery of ad hoc, daily, weekly, and monthly reports to internal stakeholders
- Create, validate, and deliver tracking links for the marketing department
- Assist in the creation, QA, validation, and reporting of A/B and multivariate tests
- Proactively monitor the marketing KPIs and UA data on a daily basis
- Analyze marketing UA performance and conduct deep-dive analysis to answer hypotheses and questions posed by the team
- Gather, transform, and analyze digital marketing data, including paid media, search, social, website, and conversion funnel analytics
- Analyze marketing data searching for top of funnel growth opportunities
- Analyze product data searching for insights to increase app engagement, conversion, and retention
- Analyze LTV/CAC drivers to support overall business growth
- Partner with marketing, product, and growth teams
- Present findings to stakeholders and make recommendations for spend targets and campaign strategies
- POV on iOS 14 and upcoming Android privacy changes, and how we can navigate tracking in light of these changes
- POV on the transition to SKAN 4.0
- Working knowledge of statistical techniques (regression, k-means clustering, PCA)
- Experience with lift studies and marketing mix modeling; working experience with Python, R, & dbt
- Experience at a small company
- Experience with a subscription business
- Analyze website and mobile app data on traffic sources and patterns. Provide insight on data trends and anomalies, making recommendations where appropriate to improve business performance.
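The A/B test reporting mentioned above typically rests on a significance test. A minimal sketch of a two-proportion z-test in plain Python follows; the conversion counts are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B converts 120/1000 vs. control's 100/1000.
z = two_proportion_z(100, 1000, 120, 1000)
print(round(z, 2))  # → 1.43 (below the usual 1.96 two-sided threshold)
```

Real test reporting would also cover sample-size planning and multiple-comparison corrections, usually via a stats library rather than hand-rolled formulas.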
Qualification
- Master's or Bachelor's degree in Computer Science
- Well-versed with IT technologies
- 2+ years of business analysis or project analysis experience
- Tech-savvy with proficiency in Microsoft Office, Google Apps, and other web and mobile applications
- Excellent written and verbal communication skills
- Self-motivated, flexible, and comfortable with a fast-paced startup environment
- Advanced experience with Excel and Google Sheets, including an understanding of visualizations, pivot tables, VLOOKUP, and other key functions
- Experience with Adobe Analytics, Google Analytics, Tableau, SQL, and Data Grid is a plus.
- Strong analytical and problem-solving skills, with clear attention to detail
- Ability to prioritize and work under tight deadlines
- Fast learner, able to master new concepts, theories, ideas, and processes with ease
- Experience creating user acquisition reports and dashboards
- Deep understanding of mobile attribution, cohort analysis, customer segmentation, and ltv modeling
- Experience pulling data and creating databases using the API of at least one of these ad platforms: Facebook, Snapchat, TikTok, Google Ads, AppLovin
- Experience with the architecture and deployment of mobile tracking solutions, including SDK integration, ad platform APIs, server postbacks, and MMP providers such as AppsFlyer, Adjust, and Kochava
If you are a data-driven individual with a passion for start-ups and have experience in business analytics, we encourage you to apply for this position. We offer a competitive salary package, flexible working hours, and a supportive work environment that fosters growth and development.
Technical & Business Expertise:
- Hands-on integration experience in SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of Data Lakes
- Proven intermediate-level proficiency in Python or a similar programming language
- Intermediate understanding of Cloud Platforms (GCP)
- Intermediate understanding of Data Warehousing
- Advanced understanding of Source Control (GitHub)
- Hands-on experience using Python, SQL, and Tableau
Data Analyst
About Amagi & Growth
Amagi Corporation is a next-generation media technology company that provides cloud broadcast and targeted advertising solutions to broadcast TV and streaming TV platforms. Amagi enables content owners to launch, distribute and monetize live linear channels on Free-Ad-Supported TV and video services platforms. Amagi also offers 24x7 cloud managed services bringing simplicity, advanced automation, and transparency to the entire broadcast operations. Overall, Amagi supports 500+ channels on its platform for linear channel creation, distribution, and monetization with deployments in over 40 countries. Amagi has offices in New York (Corporate office), Los Angeles, and London, broadcast operations in New Delhi, and our Development & Innovation center in Bangalore. Amagi is also expanding in Singapore, Canada and other countries.
Amagi has seen phenomenal growth as a global organization over the last 3 years. Amagi has been a profitable firm for the last 2 years, and is now looking at investing in multiple new areas. Amagi has been backed by 4 investors - Emerald, Premji Invest, Nadathur and Mayfield. As of the fiscal year ending March 31, 2021, the company witnessed stellar growth in the areas of channel creation, distribution, and monetization, enabling customers to extend distribution and earn advertising dollars while saving up to 40% in cost of operations compared to traditional delivery models. Some key highlights of this include:
· Annual revenue growth of 136%
· 44% increase in customers
· 50+ Free Ad Supported Streaming TV (FAST) platform partnerships and 100+ platform partnerships globally
· 250+ channels added to its cloud platform taking the overall tally to more than 500
· Approximately 2 billion ad opportunities every month supporting OTT ad-insertion for 1000+ channels
· 60% increase in workforce in the US, UK, and India to support strong customer growth (current headcount being 360 full-time employees + Contractors)
· 5-10x growth in ad impressions among top customers
- Conducting advanced statistical analysis to provide actionable insights, identify trends, and measure performance
- Performing data exploration, cleaning, preparation, and feature engineering, in addition to executing tasks such as building a POC and validation/A-B testing
- Collaborating with data engineers & architects to implement and deploy scalable solutions
- Communicating results to diverse audiences with effective writing and visualizations
- Identifying and executing on high-impact projects, triaging external requests, and ensuring timely completion so the results remain useful
- Providing thought leadership by researching best practices, conducting experiments, and collaborating with industry leaders
What you need to have:
- 2-4 years of experience with machine learning algorithms, predictive analytics, and demand forecasting in real-world projects
- Strong statistical background in descriptive and inferential statistics, regression, and forecasting techniques.
- Strong programming background in Python (including packages like TensorFlow), R, D3.js, Tableau, Spark, SQL, and MongoDB.
- Preferred exposure to optimization & meta-heuristic algorithms and related applications
- Background in a highly quantitative field like Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, Industrial Engineering, or similar fields.
- Should have 2-4 years of experience in Data Science algorithm design and implementation, and data analysis in different applied problems.
- DS mandatory skills: Python, R, SQL, deep learning, predictive analysis, applied statistics
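One of the forecasting techniques named above, simple exponential smoothing, can be sketched in a few lines of plain Python; the demand series and smoothing factor are arbitrary illustrative values:

```python
def exponential_smoothing(series, alpha):
    """One-step-ahead forecasts via simple exponential smoothing."""
    level = series[0]
    forecasts = [level]
    for obs in series[1:]:
        level = alpha * obs + (1 - alpha) * level
        forecasts.append(level)
    return forecasts

# Hypothetical weekly demand; alpha=0.5 weights new and old data equally.
demand = [100.0, 110.0, 105.0, 115.0, 120.0]
smoothed = exponential_smoothing(demand, alpha=0.5)
print(smoothed)  # → [100.0, 105.0, 105.0, 110.0, 115.0]
```

Production demand forecasting would add trend/seasonality terms (Holt-Winters, ARIMA) and tune alpha by minimizing forecast error on a holdout window.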
- Sr. Data Engineer:
Core Skills: Data Engineering, Big Data, PySpark, Spark SQL, and Python
Candidate with prior Palantir Cloud Foundry OR Clinical Trial Data Model background is preferred
Major accountabilities:
- Responsible for Data Engineering, Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, re-usable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion.
- Has a good understanding of the Foundry Platform landscape and its capabilities
- Performs data analysis required to troubleshoot data-related issues and assists in their resolution.
- Defines company data assets (data models) and the PySpark/Spark SQL jobs that populate them.
- Designs data integrations and the data quality framework.
- Designs & implements integration with internal and external systems and the F1 AWS platform using Foundry Data Connector or Magritte Agent
- Collaborates with data scientists, data analysts, and technology teams to document and leverage their understanding of the Foundry integration with different data sources
- Actively participates in agile work practices
- Coordinates with the Quality Engineer to ensure that all quality controls, naming conventions & best practices have been followed
Desired Candidate Profile :
- Strong data engineering background
- Experience with Clinical Data Model is preferred
- Experience in
- SQL Server ,Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
- Java and Groovy for our back-end applications and data integration tools
- Python for data processing and analysis
- Cloud infrastructure based on AWS EC2 and S3
- 7+ years of IT experience, 2+ years of experience with the Palantir Foundry Platform, 4+ years of experience with Big Data platforms
- 5+ years of Python and PySpark development experience
- Strong troubleshooting and problem solving skills
- BTech or master's degree in computer science or a related technical field
- Experience designing, building, and maintaining big data pipelines systems
- Hands-on experience on Palantir Foundry Platform and Foundry custom Apps development
- Able to design and implement data integration between Palantir Foundry and external Apps based on Foundry data connector framework
- Hands-on in programming languages primarily Python, R, Java, Unix shell scripts
- Hands-on experience in AWS / Azure cloud platforms and stacks
- Strong in API based architecture and concept, able to do quick PoC using API integration and development
- Knowledge of machine learning and AI
- Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.
- Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision
Ganit Inc. is the fastest-growing Data Science & AI company in Chennai.
Founded in 2017 by 3 industry experts who are alumni of the IITs/SPJIMR, each with 17+ years of experience in the field of analytics.
We are in the business of maximising Decision Making Power (DMP) for companies by providing solutions at the intersection of hypothesis-based analytics, discovery-based AI, and IoT. Our solutions are a combination of customised services and a functional product suite.
We primarily operate as a US-based start-up and have clients across the US, Asia-Pacific, and the Middle East, with offices in the USA (New Jersey) and India (Chennai).
Started with 3 people, the company is growing fast and now has 100+ employees.
1. What do we expect from you
- Should possess a minimum of 2 years of experience in data analytics model development and deployment
- Skills relating to core Statistics & Mathematics.
- Huge interest in handling numbers
- Ability to understand all domains in businesses across various sectors
- Natural passion towards numbers, business, coding, visualisation
2. Necessary skill set:
- Proficient in R/Python, Advanced Excel, SQL
- Should have worked with Retail/FMCG/CPG projects solving analytical problems in Sales/Marketing/Supply Chain functions
- Very good understanding of algorithms, mathematical models, statistical techniques, and data mining, such as regression models, clustering/segmentation, time series forecasting, decision trees/random forests, etc.
- Ability to choose the right model for the right data and translate that into code in R, Python, or VBA (proven capabilities)
- Should have handled large datasets, with a thorough understanding of SQL
- Ability to handle a team of Data Analysts
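The clustering/segmentation technique listed above can be illustrated with a bare-bones one-dimensional k-means in plain Python; the spend values, initial centroids, and function name are hypothetical:

```python
def kmeans_1d(points, centroids, iters=10):
    """Tiny 1-D k-means: assign points to the nearest centroid, re-average."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Hypothetical customer-spend values with two obvious segments.
spend = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
centers = kmeans_1d(spend, centroids=[0.0, 5.0])
print(centers)  # → [1.5, 10.5]
```

Real segmentation work would use multi-dimensional features and a library implementation (e.g. scikit-learn's KMeans) with an elbow or silhouette check on k.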
3. Good to have skill set:
- Microsoft PowerBI / Tableau / Qlik View / Spotfire
4. Job Responsibilities:
- Translate business requirements into technical requirements
- Data extraction, preparation and transformation
- Identify, develop, and implement statistical techniques and algorithms that address business challenges and add value to the organisation
- Create and implement data models
- Interact with clients for queries and delivery adoption
5. Screening Methodology
- Problem Solving round (Telephonic Conversation)
- Technical discussion round (Telephonic Conversation)
- Final fitment discussion (Video Round)