- Assist our Growth Strategists in analyzing the results of A/B experiments.
- Analyse customer engagement rates and customer acquisition projects (e.g. churn rate prediction, attribution, etc.)
- Analyse marketing channel performance and deliver deep-dive reports to stakeholders and management
- Build statistical experimentation templates for faster A/B test outputs
- Work on forecasting models and assist senior management in creating frameworks for growth models
- Locally implement marketing analytics projects related to improving marketing channel effectiveness, customer segmentation, campaign optimization, etc.
- Monitor campaigns against key performance indicators (KPIs), staying fully aware of trends, analytics, successes, and risks in order to achieve business objectives.
- Translate complex ideas into understandable reports/documentation, leveraging leading software tools such as Tableau.
- 3-5 years of experience in strategy/consulting/analytics/project management roles; experience in e-commerce, start-ups, or unicorns (CARS24, OLA, SWIGGY, FLIPKART, OYO) or entrepreneurial experience preferred, plus at least 2 years of experience leading a team
- Top-notch academics from a Tier 1 college (IIM/IIT/NIT)
- Must have SQL/PostgreSQL/Tableau experience.
- Added advantage: experience with Google Analytics and CRM tools (MoEngage, Braze, Leanplum).
- Knowledge of statistical programming languages (Python, R, etc.) preferred.
- Analytical mindset with ability to present data in a structured and informative way
- Enjoy a fast-paced environment and can align business objectives with product priorities
- 3+ years of experience in the practical implementation and deployment of ML-based systems preferred.
- BE/B.Tech or M.Tech (preferred) in CS/Engineering with a strong mathematical/statistical background
- Strong mathematical and analytical skills, especially in statistical and ML techniques, and familiarity with different supervised and unsupervised learning algorithms
- Implementation experience and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python
- Experience in developing and deploying on cloud (AWS, Google Cloud, or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML and data frameworks such as pandas, Keras, and TensorFlow
Key work responsibilities
Supporting the company mission by addressing complex business problems with data-driven solutions.
Developing end-to-end ML production-ready solutions and new algorithms.
Designing and developing machine learning techniques using Python
Implementing prototypes using suitable statistical tools and artificial intelligence algorithms.
Preparing high quality research papers and participating in conferences to present and report experimental results and research findings.
Driving innovation and efficiency by analysing data, from exploration and feature engineering through to actionable insights.
Leading research and innovation, collaborating with internal teams, and facilitating reviews of ML systems so that innovative ideas can be prototyped into new models.
Managing interns and new hires in developing research pipelines
Ph.D. (or Master's with 2+ years of industrial experience) in Computer Science, Industrial or Electrical Engineering, Machine Learning, Mathematical Optimization, Statistics, or Mathematics.
Practical experience with ML in research and development projects
Ability to work on multiple projects. Must have strong design and implementation skills.
Experience with large-scale production code development is a plus.
Ability to conduct research based on complex business problems.
Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
Must have programming experience in Python.
Familiarity with Docker, ML libraries such as PyTorch, sklearn, and pandas, and SQL is a plus
Excellent English communication skills, both written and verbal.
● Able to contribute to the gathering of functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise and solving challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate
architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive results forward
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with big data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala
● Strong leadership experience: Leading meetings, presenting if required
● Excellent communication skills: Demonstrated ability to explain complex technical
issues to both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience with cloud platforms such as AWS is preferable
● Good understanding of, and ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements.
A degree in a quantitative field such as Science, Statistics, Informatics, or Information Systems. They should also have experience using the following software/tools:
● Experience with big data tools: Hadoop, Hive, Spark, Kafka, etc.
● Experience querying multiple SQL/NoSQL databases, including Oracle, MySQL, MongoDB, etc.
● Experience in Redis, RabbitMQ, Elastic Search is desirable.
● Strong experience with object-oriented/functional/scripting languages: Python (preferred), Core Java, JavaScript, Scala, shell scripting, etc.
● Must have skills in debugging complex code; experience with ML/AI algorithms is a plus.
● Experience with a version control tool such as Git is mandatory.
● Experience with AWS cloud services: EC2, EMR, RDS, Redshift, S3
● Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Ensure and own data integrity across distributed systems.
- Extract, transform, and load data from multiple systems into the BI platform for reporting.
- Create data sets and data models to build intelligence upon.
- Develop and own various integration tools and data points.
- Hands-on development and/or design within the project in order to maintain timelines.
- Work closely with the Project manager to deliver on business requirements OTIF (on time in full)
- Understand cross-functional business data points thoroughly and be the single point of contact (SPOC) for all data-related queries.
- Work with both Web Analytics and Backend Data analytics.
- Support the rest of the BI team in generating reports and analysis
- Quickly learn and use bespoke and third-party SaaS reporting tools with little documentation.
- Assist in presenting demos and preparing materials for Leadership.
- Strong experience in data warehouse modeling techniques and SQL queries
- A good understanding of designing, developing, deploying, and maintaining Power BI report solutions
- Ability to create KPIs, visualizations, reports, and dashboards based on business requirements
- Knowledge and experience in prototyping, designing, and requirement analysis
- Be able to implement row-level security on data and understand application security layer models in Power BI
- Proficiency in writing DAX queries in Power BI Desktop.
- Expertise in using advanced level calculations on data sets
- Experience in the Fintech domain and stakeholder management.
We are looking for
A Senior Software Development Engineer (SDE2) who will be instrumental in the design and development of our backend technology, which manages our exhaustive data pipelines and AI models. Simplifying complexity and building technology that is robust and scalable is your North Star. You'll work closely alongside our CTO, machine learning engineers, frontend developers, and the wider technical team to build new capabilities focused on speed and reliability.
You'll own your work, to build, test and iterate quickly, with direct guidance from our CTO.
Please note: you must have more than 2 years of proven industry experience.
Your work includes
- Own and manage the entire engineering infrastructure that supports the Greendeck platform.
- Create highly scalable, robust, and available Python microservices.
- Design the architecture to stream data on a huge scale across multiple services.
- Create and manage data pipelines using tools like Kafka, Celery.
- Deploy Serverless functions to process and manage data.
- Work with a variety of databases and storage systems to store and strategically manage data.
- Write connectors to collect data from various third-party services, data stores, and APIs.
- Strong experience in Python, creating scripts, apps, or services
- Strong automation and scripting skills
- Knowledge of at least one SQL and one NoSQL database
- Experience working with messaging systems like Kafka and RabbitMQ
- Good knowledge of dataframes and data manipulation
- Have built and deployed apps using FastAPI, Flask, or similar tech
- Knowledge of CI/CD paradigm
- Basic knowledge about Docker
- Have knowledge of creating and using REST APIs
- Good knowledge of OOP fundamentals.
- (Optional) Knowledge of Celery/Airflow
- (Optional) Knowledge of Lambda/serverless
- (Optional) Have connected apps using OAuth
What you can expect
- Attractive pay, bonus scheme and flexible vacation policy.
- A truly flexible, trust-based, performance-driven work culture.
- Lunch is on us, every day!
- A young and passionate team building elegant products with intricate technology for the future of businesses around the world. Our average age is 25!
- The chance to make a huge difference to the success of a world-class SaaS product and the opportunity to make an impact.
It's important to us
- That you relocate to Indore
- That you have a minimum of 2 years of experience working as a Software Developer
What you’ll do
- Deliver plugins for our Python-based ETL pipelines.
- Deliver Python microservices for provisioning and managing cloud infrastructure.
- Implement algorithms to analyse large data sets.
- Draft design documents that translate requirements into code.
- Deal with challenges associated with handling large volumes of data.
- Assume responsibilities from technical design through technical client support.
- Manage expectations with internal stakeholders and context-switch in a fast paced environment.
- Thrive in an environment that uses AWS and Elasticsearch extensively.
- Keep abreast of technology and contribute to the engineering strategy.
- Champion best development practices and provide mentorship.
What we’re looking for
- Experience in Python 3.
- Python libraries used for data (such as pandas, numpy).
- Performance tuning.
- Object Oriented Design and Modelling.
- Delivering complex software, ideally in a FinTech setting.
- CI/CD tools.
- Knowledge of design patterns.
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.
About SteelEye Culture
- Work from home until you are vaccinated against COVID-19
- Top-of-the-line health insurance
- Order discounted meals every day from a dedicated portal
- Fair and simple salary structure
- 30+ holidays in a year
- Fresh fruits every day
- Centrally located. 5 mins to the nearest metro station (MG Road)
- Measured on output and not input
Experience - 2 to 5 Years
- Sound understanding of Google Cloud Platform
- Should have worked with BigQuery and Workflows or Composer
- Experience of migrating to GCP and integration projects on large-scale environments
- ETL technical design, development and support
- Good SQL and Unix scripting skills
- Programming experience with Python, Java or Spark would be desirable, but not essential
- Good communication skills.
- Experience with SOA and services-based data solutions would be advantageous
- Partnering with clients and internal business owners (product, marketing, edit, etc.) to understand needs and develop models and products for the Kaleidofin business line.
- Good understanding of the underlying business and the workings of cross-functional teams for successful execution
- Design and develop analyses based on business requirement needs and challenges.
- Leveraging statistical analysis on consumer research and data mining projects, including segmentation, clustering, factor analysis, multivariate regression, predictive modeling, hyperparameter tuning, ensembling, etc.
- Providing statistical analysis on custom research projects and consulting on A/B testing and other statistical analyses as needed, along with other reports and custom analyses as required.
- Identify and use appropriate investigative and analytical technologies to interpret and verify results.
- Apply and learn a wide variety of tools and languages to achieve results
- Use best practices to develop statistical and/or machine learning techniques to build models that address business needs.
- Collaborate with the team to improve the effectiveness of business decisions using data and machine learning/predictive modeling.
- Innovate on projects by using new modeling techniques or tools.
- Utilize effective project planning techniques to break down complex projects into tasks and ensure deadlines are kept.
- Communicate findings to team and leadership to ensure models are well understood and incorporated into business processes.
- 2+ years of experience in advanced analytics, model building, statistical modeling, optimization, and machine learning algorithms.
- Machine learning algorithms: crystal-clear understanding, coding, implementation, error-analysis, and model-tuning knowledge of linear regression, logistic regression, SVMs, shallow neural networks, clustering, decision trees, random forests, boosted trees, recommender systems, ARIMA, and anomaly detection; feature selection, hyperparameter tuning, model selection, error analysis, and ensemble methods.
- Strong in programming languages like Python and data processing using SQL or equivalent, with the ability to experiment with newer open-source tools
- Experience normalizing data to ensure it is homogeneous and consistently formatted, enabling sorting, querying, and analysis.
- Experience designing, developing, implementing and maintaining a database and programs to manage data analysis efforts.
- Experience with big data and cloud computing, e.g. Spark and Hadoop (MapReduce, Pig, Hive)
- Experience in risk and credit scoring domains preferred