Company Introduction:
Emproto Technologies is a fast-growing product development firm offering services across product strategy, UI/UX design, and application development. Our mission is to give wings to ideas. Founded by IIM graduates, our core team has 50+ years of product development experience. Emproto has worked with organizations ranging from startups such as Grexter Coliving, Phable Care, Design Cafe, and Innoviti Payment Solutions to large corporates such as Asian Paints and Titan. We are growing fast and are looking for excellent, committed developers to join our team of tech/product leaders.

Job Requirements:
- Any experience working with data lakes is a plus.
- Ability to understand complex data models, map the source data model to the target data model, and define the necessary transformations.
- Strong SQL scripting skills.
- Experience in data transformation using more than one ETL tool.
- Experience with Big Data stores such as Hadoop, Redshift, or BigQuery, and data blending/wrangling tools (such as Talend or Alteryx), is a plus.
- Strong coding skills in Python.
- Ability to use a wide variety of databases (both SQL and NoSQL) and tools.
- Good problem-solving and analytical skills.
- A team player with good communication skills and a self-starter attitude.
- Ability to pick up new technology quickly.
The Data Engineer will be responsible for selecting and integrating the required Big Data tools and frameworks, and will implement data ingestion and ETL/ELT processes.

Required Experience, Skills and Qualifications:
- Hands-on experience with Big Data tools/technologies such as Spark, Databricks, MapReduce, Hive, and HDFS.
- Expertise in and an excellent understanding of the big data toolset, such as Sqoop, Spark Streaming, Kafka, and NiFi.
- Proficiency in at least one of Python, Scala, or Java, with 4+ years' experience.
- Experience with cloud infrastructure such as MS Azure and Azure Data Lake.
- Good working knowledge of NoSQL databases (MongoDB, HBase, Cassandra).
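The SQL scripting and source-to-target mapping skills called for above can be illustrated with a minimal stdlib-only sketch; the table and column names are invented for illustration, and a real ETL job would run against a warehouse rather than SQLite:

```python
import sqlite3

# In-memory database standing in for a source system and a target warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_orders (id INTEGER, amount_cents INTEGER, country TEXT)")
conn.executemany(
    "INSERT INTO src_orders VALUES (?, ?, ?)",
    [(1, 1250, "IN"), (2, 800, "IN"), (3, 4000, "US")],
)

# Target model: one row per country, with amounts converted to whole units.
conn.execute("CREATE TABLE tgt_revenue (country TEXT, total_amount REAL)")

# The transformation: aggregate and convert cents -> units in one SQL statement.
conn.execute(
    """
    INSERT INTO tgt_revenue (country, total_amount)
    SELECT country, SUM(amount_cents) / 100.0
    FROM src_orders
    GROUP BY country
    """
)

rows = dict(conn.execute("SELECT country, total_amount FROM tgt_revenue"))
print(rows)  # {'IN': 20.5, 'US': 40.0}
```

The same pattern (map source columns to a target model, express the transformation declaratively in SQL) scales up to the ETL tools the posting names.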
Company Overview:
Rakuten, Inc. (TSE first section: 4755) is the largest e-commerce company in Japan and the third-largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer- and business-focused services including e-commerce, e-reading, travel, banking, securities, credit cards, e-money, portal and media, online marketing, and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen.

In Japanese, Rakuten stands for "optimism." It means we believe in the future. It's an understanding that, with the right mindset, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications, and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.

Website: https://www.rakuten.com/
Crunchbase: Rakuten has raised a total of $42.4M in funding over 2 rounds
Company size: 10,001+ employees
Founded: 1997
Headquarters: Tokyo, Japan
Work location: Bangalore (M.G. Road)

Role Description - Data Engineer for AN group (Location: India)
We are looking for an engineering candidate for our Autonomous Networking team. Key responsibilities include:
The ideal candidate must have the following abilities:
- Hands-on experience with big data computation technologies (at least one, and potentially several, of: Spark and Spark Streaming, Hadoop, Storm, Kafka Streaming, Flink, etc.)
- Familiarity with other related big data technologies, such as big data storage (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, BigTable, BigQuery, ClickHouse), messaging layers (Kafka, Kinesis), cloud and container-based deployments (Docker, Kubernetes), Scala, Akka, Socket.IO, Elasticsearch, RabbitMQ, Redis, Couchbase, Java, and Go.
- Partner with product management and delivery teams to align and prioritize current and future product development initiatives in support of our business objectives.
- Work with cross-functional engineering teams including QA, Platform Delivery, and DevOps.
- Evaluate current-state solutions to identify areas to improve standards, simplify, and enhance functionality, and/or transition to more effective solutions to improve supportability and time to market.
- Not afraid of refactoring existing systems and guiding the team through it.
- Experience with event-driven architecture and complex event processing.
- Extensive experience building and owning large-scale distributed backend systems.
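The event-driven architecture mentioned above can be sketched in a few lines of framework-free Python; the event names and handlers are invented, and in production this consumer loop would sit behind Kafka or Kinesis rather than an in-process queue:

```python
import queue

# A tiny in-process event bus standing in for a Kafka/Kinesis topic.
events = queue.Queue()
for payload in ({"type": "order_placed", "amount": 100},
                {"type": "order_cancelled", "amount": 100},
                {"type": "order_placed", "amount": 250}):
    events.put(payload)

# State updated by the consumer as events arrive.
balance = 0

def on_placed(event):
    global balance
    balance += event["amount"]

def on_cancelled(event):
    global balance
    balance -= event["amount"]

# Dispatch each event to a handler keyed by event type.
handlers = {"order_placed": on_placed, "order_cancelled": on_cancelled}
while not events.empty():
    event = events.get()
    handlers[event["type"]](event)

print(balance)  # 250
```

The core idea carries over directly: producers append immutable events, and consumers derive state by folding handlers over the stream.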
We are looking for a Business Intelligence (BI) Developer to create and manage BI and analytics solutions that turn data into knowledge.

Experience level: 2 years of experience in Power BI

Responsibilities:
1. Translate business needs into technical specifications
2. Design, build, and deploy BI solutions (e.g., reporting tools)
3. Collaborate with teams to integrate systems
4. Develop and execute database queries and conduct analysis
5. Create visualizations, dashboards, and reports for requested projects
6. Develop and update technical documentation

Required Skills:
- 2 years of experience working with Power BI
- Strong knowledge and experience in Power BI: DAX, Power Query, Power BI Service, and Power BI Desktop visualizations
- Strong understanding of database management systems and the basics of the ETL (extract, transform, load) framework
- Knowledge and experience of SQL
- Intermediate-to-advanced MS Excel knowledge and experience
- Ability to take initiative and be innovative
- Analytical mind with a problem-solving aptitude
- Excellent communication
- BSc/MSc in Computer Science, MCA, Engineering, or a relevant field
- Basic knowledge and understanding of a programming language such as .NET is a plus
Want to shape the future of energy through data science? We have the data, if you have the skills to unlock the patterns behind how a small change in one input parameter can have a large impact on optimized energy output parameters such as energy price. The Energy Exemplar (EE) data team is looking for an experienced applied ML data scientist to join our Pune office. The data team is committed to helping EE customers keep a check on how heat rate, capacity expansion, and daily unit commitment are subject to variations in demand, renewables, gas prices, etc., among many other use cases. By continuously gathering and analysing data, and by working with organizations inside and outside EE, the data team stays agile to combat evolving challenges. Our mission is to help advise customers and systems with industry-leading, proactive, optimal predictions, and to engage in valuable partnerships.

As a dedicated Data Scientist on our Research team, you will apply data science and your machine learning expertise to enhance our intelligent systems to predict and provide proactive advice. You'll work with the team to identify and build features, create experiments, vet ML models, and ship successful models that provide value for hundreds of EE customers. At EE, you'll have access to vast amounts of energy-related data from our sources. Our data pipelines are curated and supported by engineering teams (so you won't have to do much data engineering; you get to do the fun stuff). We also offer many company-sponsored classes and conferences that focus on data science and ML. There is great growth opportunity for data science at EE.

Responsibilities:
- Monitor and analyse data to uncover optimization gaps.
- Develop high-performance algorithms, machine learning models, or other methodologies to close optimization gaps.
- Identify performant features and models and make them universally accessible to our teams across EE.
- Provide technical leadership to our team by reviewing problem sets, proposing prediction models, and reviewing experiments and models.
- Act as a resident expert for machine learning, statistics, and experiment design.

Qualifications:
- 5+ years of professional experience in experiment design and applied machine learning, predicting outcomes in large-scale, complex datasets.
- Proficiency in Python, Azure ML, or other statistics/ML tools.
- Proficiency in deep neural networks and Python-based frameworks.
- Proficiency in Azure Databricks, Hive, and Spark.
- Proficiency in deploying models into production (Azure stack).
- Moderate coding skills. SQL or similar required; C# or other languages strongly preferred.
- Outstanding communication and collaboration skills. You can learn from and teach others.
- Strong drive for results. You have a proven record of shepherding experiments into successful shipping products/services.
- Experience with prediction in adversarial (energy) environments highly desirable.
- Understanding of the model development ecosystem across platforms, including development, distribution, and best practices, highly desirable.
- A Master's or Ph.D. degree with coursework in statistics, data science, experiment design, and machine learning highly desirable.
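The experiment-design emphasis above boils down to a simple discipline: hold out a test split and require that a candidate model beat a naive baseline before it ships. A stdlib-only sketch (the data and split are invented for illustration; real work would use Azure ML or scikit-learn):

```python
import statistics

# Toy regression data: y is roughly 2*x plus noise. Invented for illustration.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1), (5, 9.8), (6, 12.3)]
train, test = data[:4], data[4:]

# Baseline: always predict the training-set mean of y.
baseline = statistics.mean(y for _, y in train)

# Candidate: least-squares slope through the origin, fit on the training split.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

def mae(predict):
    """Mean absolute error of a predictor on the held-out test split."""
    return statistics.mean(abs(predict(x) - y) for x, y in test)

baseline_mae = mae(lambda x: baseline)
model_mae = mae(lambda x: slope * x)
print(model_mae < baseline_mae)  # True
```

The structure is the same at scale: fix the split before fitting, score only on held-out data, and keep the baseline in every comparison.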
Role: Data Engineer
Company: PayU
Location: Bangalore/Mumbai
Experience: 2-5 yrs

About Company:
PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities. The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple, and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks for merchants, allowing consumers to use credit in ways that suit them, and enabling a greater number of global citizens to access credit services. Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa, and South East Asia enable us to combine the expertise of high-growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services. India is PayU's biggest market globally, and the company has already invested $400 million in the region in the last 4 years. In its next phase of growth, PayU is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We will do this through three mechanisms: build; co-build/partner; and select strategic investments. PayU supports over 350,000 merchants and millions of consumers making payments online, with over 250 payment methods and 1,800+ payment specialists.
The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and huge growth potential for merchants.

Job responsibilities:
- Design infrastructure for data, especially (but not only) for consumption in machine learning applications.
- Define the database architecture needed to combine and link data, and ensure integrity across different sources.
- Ensure the performance of data systems for machine learning, from customer-facing web and mobile applications using cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems.
- Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques where needed.
- Build data pipelines, including implementing, testing, and maintaining infrastructural components of the data engineering stack.
- Work closely with Data Engineers, ML Engineers, and SREs to gather data engineering requirements and to prototype, develop, validate, and deploy data science and machine learning solutions.

Requirements to be successful in this role:
- Strong knowledge and experience in Python, Pandas, data wrangling, ETL processes, statistics, data visualisation, data modelling, and Informatica.
- Strong experience with scalable compute solutions such as Kafka and Snowflake.
- Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions, etc.
- Strong experience with data engineering practices (i.e., data ingestion pipelines and ETL).
- A good understanding of machine learning methods, algorithms, pipelines, testing practices, and frameworks.
- (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI).
- Experience designing and implementing tools that support sharing of data, code, and practices across organizations at scale.
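The workflow-management tools named above (Airflow, AWS Step Functions) all model a pipeline as a dependency graph of tasks. A minimal stdlib sketch of that idea, with invented task names (Airflow expresses the same graph with operators and `>>` edges):

```python
from graphlib import TopologicalSorter

# A toy pipeline: each task maps to the set of tasks it depends on.
graph = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
}

# Stand-in task bodies that just record the execution order.
log = []
tasks = {
    "extract": lambda: log.append("extract"),
    "transform": lambda: log.append("transform"),
    "validate": lambda: log.append("validate"),
    "load": lambda: log.append("load"),
}

# Run the tasks in an order that respects every dependency edge.
for name in TopologicalSorter(graph).static_order():
    tasks[name]()

print(log)  # ['extract', 'transform', 'validate', 'load']
```

What Airflow and Step Functions add on top of this core is scheduling, retries, backfills, and observability; the graph-of-tasks model is the same.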
Position: Software Developer, Data Engineering team
Location: Pune (initially 100% remote due to Covid-19 for the coming 1 year)

We are looking for an exceptional Software Developer for our Data Engineering India team who can contribute to building a world-class big data engineering stack that will be used to fuel our Analytics and Machine Learning products. This person will contribute to the architecture, operation, and enhancement of:
- Our petabyte-scale data platform, with a key focus on finding solutions that can support the Analytics and Machine Learning product roadmap. Every day, terabytes of ingested data need to be processed and made available for querying and insights extraction for various use cases.
- Our bespoke Machine Learning pipelines. This will also provide opportunities to contribute to the prototyping, building, and deployment of Machine Learning models.

About the Organisation:
- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, the United States, Germany, the United Kingdom, and India.
- You will gain work experience in a global environment. We speak over 20 different languages, come from more than 16 different nationalities, and over 42% of our staff are multilingual.

You:
- Have at least 4 years' experience.
- Have a deep technical understanding of Java or Golang. Production experience with Python is a big plus and an extremely valuable supporting skill for us.
- Have exposure to modern Big Data tech (Cassandra/Scylla, Kafka, Ceph, the Hadoop stack, Spark, Flume, Hive, Druid, etc.), while understanding that certain problems may require completely novel solutions.
- Have exposure to one or more modern ML tech stacks (Spark MLlib, TensorFlow, Keras, GCP ML stack, AWS SageMaker) - a plus.
- Have experience working in an Agile/Lean model.
- Have experience supporting and troubleshooting large systems.
- Have exposure to configuration management tools such as Ansible or Salt.
- Have exposure to IaaS platforms such as AWS, GCP, or Azure.
- Good addition: experience working with large-scale data.
- Good addition: experience architecting, developing, and operating data warehouses, big data analytics platforms, and high-velocity data pipelines.

Note: We are not looking for a Big Data Developer / Hadoop Developer.
Designation: Specialist - Cloud Service Developer (ABL_SS_600)

Position description:
The person will be primarily responsible for developing solutions using AWS services, e.g. Fargate, Lambda, ECS, ALB, NLB, S3, etc.
- Apply advanced troubleshooting techniques to resolve issues pertaining to service availability, performance, and resiliency.
- Monitor and optimize performance using AWS dashboards and logs.
- Partner with engineering leaders and peers in delivering technology solutions that meet the business requirements.
- Work with the cloud team in an agile approach and develop cost-optimized solutions.

Primary Responsibilities:
- Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc.

Reporting Team:
- Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
- Reporting Department: Application Development (2487)

Required Skills:
- AWS certification would be preferred.
- Good understanding of monitoring (CloudWatch, alarms, logs, custom metrics, SNS configuration).
- Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora, and other AWS services.
- Knowledge of storage (S3, lifecycle management, event configuration) preferred.
- Good grounding in data structures and programming (PySpark / Python / Golang / Scala).
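The Lambda development named above follows a simple contract: a handler function receives an event dict and a context object and returns a response. A minimal sketch that can be invoked locally; the S3-style event shape and key names here are invented for illustration, since the real event depends on the trigger:

```python
import json

# A minimal AWS Lambda-style handler. The event shape (an S3-style record
# list) is invented for illustration; real events depend on the trigger.
def handler(event, context=None):
    keys = [r["s3"]["object"]["key"] for r in event.get("Records", [])]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": keys}),
    }

# Invoke locally with a fake event, as one would in a unit test.
fake_event = {"Records": [{"s3": {"object": {"key": "uploads/data.csv"}}}]}
response = handler(fake_event)
print(response["statusCode"])  # 200
```

Because the handler is a plain function, it can be unit-tested without any AWS infrastructure, which keeps troubleshooting of availability and performance issues out of the deploy loop.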
About antuit.ai:
Antuit.ai is the leader in AI-powered SaaS solutions for demand forecasting and planning, merchandising, and pricing. We have the industry's first solution portfolio - powered by artificial intelligence and machine learning - that can help you digitally transform your forecasting, assortment, pricing, and personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin, and sell-through. Antuit.ai's executives, comprising industry leaders from McKinsey, Accenture, IBM, and SAS, together with our team of Ph.D.s, data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.

The Role:
Antuit is looking for a Data Scientist / Sr. Data Scientist with knowledge and experience in developing machine learning algorithms, particularly in the supply chain and forecasting domain, using data science toolkits such as Python. In this role, you will design the approach, develop and test machine learning algorithms, and implement the solution. The candidate should have excellent communication skills and be results-driven, with a customer-centric approach to problem solving. Experience working in the demand forecasting or supply chain domain is a plus. This job also requires the ability to operate in a multi-geographic delivery environment and a good understanding of cross-cultural sensitivities.

Responsibilities include, but are not limited to, the following:
- Design, build, test, and implement predictive machine learning models.
- Collaborate with clients to align business requirements with data science systems and process solutions that ensure clients' overall objectives are met.
- Create meaningful presentations and analyses that tell a "story" focused on insights, to communicate the results/ideas to key decision makers.
- Collaborate cross-functionally with domain experts to identify gaps and structural problems.
- Contribute to standard business processes and practices as part of a community of practice.
- Be the subject matter expert across multiple work streams and clients.
- Mentor and coach team members. Set a clear vision for the team and work cohesively to attain it.

Qualifications and Skills:
- Master's or Ph.D. in Computer Science, Computer Engineering, Electrical Engineering, Statistics, Applied Mathematics, or another related field.
- 5+ years' experience working in applied machine learning, or relevant research experience for recent Ph.D. graduates.
- Highly technical: skilled in machine learning, problem solving, pattern recognition, and predictive modeling, with expertise in PySpark and Python.
- Understanding of data structures and data modeling.
- Effective communication and presentation skills.
- Able to collaborate closely and effectively with teams.
- Experience in time series forecasting preferred.
- Experience working in a start-up-type environment preferred.
- Experience in CPG and/or retail preferred.
- Strong management track record.
- Strong interpersonal skills and leadership qualities.

Information Security Responsibilities:
- Understand and adhere to Information Security policies, guidelines, and procedures, and practice them to protect organizational data and information systems.
- Take part in Information Security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).

EEOC:
Antuit.ai is an at-will, equal opportunity employer. We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
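The time series forecasting experience mentioned above usually starts from a naive benchmark that any serious demand model must beat. A stdlib-only sketch of one such benchmark, a k-period moving average; the demand series is invented, and real forecasting models (including antuit.ai's) are far richer:

```python
# Demand-forecast baseline: predict the next period as the mean of the
# last `window` observations. The series below is invented for illustration.
def moving_average_forecast(series, window=3):
    recent = series[-window:]
    return sum(recent) / len(recent)

demand = [120, 130, 125, 140, 135]
forecast = moving_average_forecast(demand, window=3)
print(round(forecast, 2))  # 133.33
```

Reporting a model's error alongside this baseline's error gives stakeholders a concrete sense of how much the model is actually adding.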
It is one of the largest communication technology companies in the world. They operate America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.