We develop and publish mobile games that millions love. With global hits and 450 million+ downloads, we're the UK's biggest hypercasual games company!
Primary Responsibilities:
• Develop and maintain applications with PySpark.
• Contribute to the overall design and architecture of the applications developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.
Must-Have Skills:
Overall experience should be 6-10 years, with a minimum of 3 years of relevant banking domain experience.
• Good experience in PySpark, including DataFrame core functions and Spark SQL.
• Good experience with SQL databases; able to write queries of fair complexity.
• Excellent experience in Big Data programming for data transformation and aggregation.
• Good at ELT architecture: business rules processing and data extraction from a data lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills.
• Any cloud experience is mandatory.
Perks:
• Work visa sponsorship (UAE)
• Medical insurance
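As a rough illustration of the "queries of fair complexity" the posting asks for, here is a minimal join-plus-aggregation sketch. It uses Python's built-in sqlite3 in place of Spark SQL or a production database, and the table names and data are invented purely for illustration:

```python
import sqlite3

# Hypothetical banking-style schema and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, branch TEXT);
CREATE TABLE txns (account_id INTEGER, amount REAL);
INSERT INTO accounts VALUES (1, 'Mumbai'), (2, 'Pune'), (3, 'Mumbai');
INSERT INTO txns VALUES (1, 100.0), (1, 250.0), (2, 75.0), (3, 500.0);
""")

# A join + aggregation: total transaction value per branch, highest first.
rows = conn.execute("""
    SELECT a.branch, SUM(t.amount) AS total
    FROM accounts a
    JOIN txns t ON t.account_id = a.id
    GROUP BY a.branch
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Mumbai', 850.0), ('Pune', 75.0)]
```

The same query pattern carries over almost verbatim to Spark SQL via `spark.sql(...)` on registered DataFrame views.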
We are looking for a machine learning (ML) developer to help us create artificial intelligence products for our clients. To do this job successfully, you need exceptional skills in statistics and programming. You will need to interact with the client and their teams to complete the job. The day-to-day responsibilities include:
1. Design machine learning systems
2. Research and implement appropriate ML algorithms and tools
3. Develop machine learning applications according to requirements
4. Select appropriate datasets and data representation methods
5. Run machine learning tests and experiments
6. Interact with the client team
7. Understand the client's problems and resolve them
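Step 5 above ("run machine learning tests and experiments") typically boils down to fitting a model on one slice of data and measuring error on a held-out slice. A minimal sketch, using only numpy least squares on synthetic data (all values invented for illustration):

```python
import numpy as np

# Synthetic experiment: recover known weights from noisy observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Simple train/test split.
X_train, X_test = X[:80], X[80:]
y_train, y_test = y[:80], y[80:]

# Fit by ordinary least squares.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Evaluate on held-out data.
mse = float(np.mean((X_test @ w - y_test) ** 2))
print(w, mse)
```

Real experiments swap in a proper framework and cross-validation, but the split-fit-evaluate loop is the same.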
Want to shape the future of energy through data science? We have the data; if you have the skills to unlock the patterns behind how a small change in one input parameter can significantly affect optimized energy output parameters, such as energy price, read on. The Energy Exemplar (EE) data team is looking for an experienced Applied ML Data Scientist to join our Pune office. The data team is committed to helping EE customers track how heat rate, capacity expansion, and daily unit commitment are subject to variations in demand, renewables, gas price, and more. There are many such use cases. By continuously gathering and analysing data, and by working with organizations inside and outside EE, the data team stays agile to combat evolving challenges. Our mission is to help advise customers and systems with industry-leading proactive optimal predictions, and to engage in valuable partnerships. As a dedicated Data Scientist on our Research team, you will apply data science and your machine learning expertise to enhance our intelligent systems to predict and provide proactive advice. You'll work with the team to identify and build features, create experiments, vet ML models, and ship successful models that add value for hundreds of EE customers. At EE, you'll have access to vast amounts of energy-related data from our sources. Our data pipelines are curated and supported by engineering teams (so you won't have to do much data engineering; you get to do the fun stuff). We also offer many company-sponsored classes and conferences focused on data science and ML. There is great growth opportunity for data science at EE.
Responsibilities:
- Monitor and analyse data to uncover optimization gaps.
- Develop high-performance algorithms, machine learning models, or other methodologies to close optimization gaps.
- Identify performant features and models and make them universally accessible to our teams across EE.
- Provide technical leadership to our team by reviewing problem sets, proposing prediction models, and reviewing experiments and models.
- Act as a resident expert for machine learning, statistics, and experiment design.
Qualifications:
- 5+ years of professional experience in experiment design and applied machine learning, predicting outcomes in large-scale, complex datasets.
- Proficiency in Python, Azure ML, or other statistics/ML tools.
- Proficiency in deep neural networks and Python-based frameworks.
- Proficiency in Azure Databricks, Hive, and Spark.
- Proficiency in deploying models into production (Azure stack).
- Moderate coding skills: SQL or similar required; C# or other languages strongly preferred.
- Outstanding communication and collaboration skills; you can learn from and teach others.
- Strong drive for results; you have a proven record of shepherding experiments into successful shipping products/services.
- Experience with prediction in adversarial (energy) environments highly desirable.
- Understanding of the model development ecosystem across platforms, including development, distribution, and best practices, highly desirable.
- A Master's or Ph.D. degree with coursework in Statistics, Data Science, Experiment Design, and Machine Learning highly desirable.
3+ years of experience in Machine Learning. Bachelor's/Master's in Computer Engineering/Science, or Bachelor's/Master's in Engineering/Mathematics/Statistics with sound knowledge of programming and computer concepts. 70% and above in 10th and 12th standard academics.
Skills:
- Strong Python/programming skills
- Good conceptual understanding of Machine Learning/Deep Learning/Natural Language Processing
- Strong verbal and written communication skills
- Able to manage a team, meet project deadlines, and interface with clients
- Able to work across different domains, quickly ramp up on business processes and flows, and translate business problems into data solutions
- Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress
- Managing available resources such as hardware, data, and personnel so that deadlines are met
- Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Verifying data quality, and/or ensuring it via data cleaning
- Supervising the data acquisition process if more data is needed
- Defining validation strategies
- Defining the pre-processing or feature engineering to be done on a given dataset
- Defining data augmentation pipelines
- Training models and tuning their hyperparameters
- Analysing the errors of the model and designing strategies to overcome them
- Deploying models to production
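The "training models and tuning their hyperparameters" step above can be sketched as a simple grid search: fit one model per candidate hyperparameter value and keep the one with the lowest validation error. A minimal numpy sketch using closed-form ridge regression on synthetic data (data and parameter grid are invented for illustration):

```python
import numpy as np

# Synthetic data; in practice this comes from the curated dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=60)

X_train, X_val = X[:40], X[40:]
y_train, y_val = y[:40], y[40:]

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: solve (X^T X + lam*I) w = X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Grid search over the regularization strength, scored on held-out data.
best_lam, best_err = None, float("inf")
for lam in [0.01, 0.1, 1.0, 10.0]:
    w = ridge_fit(X_train, y_train, lam)
    err = float(np.mean((X_val @ w - y_val) ** 2))
    if err < best_err:
        best_lam, best_err = lam, err
print(best_lam, best_err)
```

Production workflows replace the grid with cross-validation or Bayesian search, but the fit-score-select loop is the same idea.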
Job Responsibilities:
- Develop new data pipelines and ETL jobs for processing millions of records; they should scale with growth. Pipelines should be optimised to handle real-time data, batch updates, and historical data.
- Establish scalable, efficient, automated processes for complex, large-scale data analysis.
- Write high-quality code to gather and manage large data sets (both real-time and batch) from multiple sources, perform ETL, and store them in a data warehouse.
- Manipulate and analyse complex, high-volume, high-dimensional data from varying sources using a variety of tools and data analysis techniques.
- Participate in data pipeline health monitoring and performance optimisation, as well as quality documentation.
- Interact with end users/clients and translate business language into technical requirements.
- Act independently to expose and resolve problems.
Job Requirements:
- 2+ years of experience in software development and data pipeline development for enterprise analytics.
- 2+ years of working with Python, with exposure to various warehousing tools.
- In-depth experience with any commercial tool such as AWS Glue, Talend, Informatica, DataStage, etc.
- Experience with relational databases such as MySQL, MS SQL Server, Oracle, etc. is a must.
- Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
- Experience with various DevOps practices, helping the client deploy and scale systems as required.
- Strong verbal and written communication skills with other developers and business clients.
- Knowledge of the Logistics and/or Transportation domain is a plus.
- Hands-on with traditional databases and ERP systems such as Sybase and PeopleSoft.
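The extract-transform-load flow described above can be sketched in a few lines of standard-library Python. This is a toy illustration, not a production pipeline: the CSV feed, column names, and sqlite3 "warehouse" are all stand-ins for real sources and a real warehouse:

```python
import csv
import io
import sqlite3

# Hypothetical source: a CSV feed; real pipelines pull from APIs, queues, or files.
raw = io.StringIO("order_id,amount\n1,100.50\n2,\n3,40.00\n")

# Extract: read raw records.
rows = list(csv.DictReader(raw))

# Transform: drop records with missing amounts, cast types.
clean = [(int(r["order_id"]), float(r["amount"])) for r in rows if r["amount"]]

# Load: write into a warehouse table (sqlite3 stands in for the warehouse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", clean)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # (2, 140.5)
```

Tools like AWS Glue, Talend, or DataStage orchestrate the same three stages with scheduling, retries, and monitoring layered on top.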
Description
About Zycus: Founded in 1998 and headquartered in Princeton, U.S., Zycus has grown to become a leading global provider of a complete Source-to-Pay suite of procurement performance solutions. We develop cloud-based (SaaS) Source-to-Pay solutions for large global enterprises, and have successfully deployed about 200 solutions to over 1,000 global clients. We are proud to count among our clients some of the best-of-breed companies across verticals such as Manufacturing, Automotive, Banking and Finance, Oil and Gas, Food Processing, Electronics, Telecommunications, Chemicals, Health and Pharma, Education, and more. With a team of 1,000+ employees, we are present in India with 3 development centers at Bengaluru, Mumbai & Pune, and offices in the U.S., U.K., Australia, Dubai, the Netherlands, and Singapore. Know more about the LEADER of Gartner's 2013, 2015 & 2017 Magic Quadrant for Strategic Sourcing Application Suites and The Forrester Wave™: eProcurement, Q2 2017. We are in the process of launching Merlin A.I. Studio™. The artificial intelligence (AI)-based platform will allow procurement teams to build and deploy bots across the source-to-pay process. The bots will be used by firms leveraging more than 1,100 APIs from Zycus' solution suite. "By deploying the intelligent bots from Merlin A.I.
Studio™, procurement can put themselves in cruise-control mode as the bots work towards accomplishing tasks with zero human intervention," the Fortune 500-serving firm explained in its press release. "Be it running an RFI event, discovering contract risks, negotiating with suppliers or transactional procurement; all one needs to do is launch the bot and see the magic unfold." "It will empower procurement to transform their routine, repetitive & mundane procurement tasks, so that time, effort & resources can be optimized towards more strategic initiatives."
Exp: 1 to 10 years
Role: Data Scientist
Location: Bangalore
Education: Any engineering degree from IIT, NIT, IIIT, VIT, or BITS Pilani
Please carry your original ID proof along with a hard copy of your resume.
Requirements
We are especially looking for applicants with a strong background in Analytics and Data Mining (Web, Social, and Big Data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inference, Information Retrieval, Large-Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, and Human-Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply. If you are passionate about research and developing innovative technologies of interest to Zycus and the research community at large, the BigData Experience Lab may be the right place for you. All successful candidates are expected to dive deep into problem areas of Zycus's interest and invent technology solutions that not only advance the current products but also generate new product options that can strategically advantage Zycus.
Skills
- Master's or Ph.D. in statistics, mathematics, or computer science, only from Tier 1 colleges
- Experience using statistical computer languages such as R, Python, SQL, etc.
- Experience with statistical and data mining techniques, including generalized linear models/regression, random forests, boosting, trees, text mining, and social network analysis
- Experience working with and creating data architectures
- Knowledge of machine learning techniques such as clustering, decision tree learning, and artificial neural networks
- Knowledge of advanced statistical techniques and concepts, including regression, properties of distributions, and statistical tests
- 2-10 years of experience manipulating data sets and building statistical models
- Experience using web services: Redshift, S3, Spark, DigitalOcean, etc.
- Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
- Experience visualizing/presenting data
The Data Scientist will report to the Director of Engineering - Data Science. The roles and responsibilities are as below:
- Work as the data strategist, identifying and integrating new datasets that can be leveraged through our product capabilities, and work closely with the engineering team to strategize and execute the development of data products
- Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
- Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables
- Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy
- Analyze data for trends and patterns, and interpret data with a clear objective in mind
- Implement analytical models into production by collaborating with software developers and machine learning engineers
Communicate analytic solutions to stakeholders and implement improvements as needed to operational systems.
Benefits: Along with a competitive compensation structure, Zycus believes in an open-culture learning environment, where everyone gets a chance to share their ideas and deliver par excellence. Here's a sneak peek into our life at Zycus.
WyngCommerce is building a global enterprise AI platform for top-tier brands and retailers to drive profitability for our clients. Our vision is to develop a self-learning retail backend that enables our clients to become more agile and responsive to demand and supply volatilities. We are looking for a Business Analyst to join our team. As a BA, you will take end-to-end ownership of onboarding new clients, running proofs-of-concept and pilots with them on different AI product applications, and ensuring timely product roll-out and customer success for the clients. You will also be expected to provide significant inputs to the sales, engineering, data science, and product teams to help us build for scale. An eye for detail, the ability to process and analyze data quickly, and effective communication with different teams (client, sales, and engineering) are the qualities we are looking for. There will be opportunities to grow within the same job family (leading a team of analysts) or to move to other areas of the business, such as customer success, data science, or product management.
KEY RESPONSIBILITIES:
- Understand the client deliverables from the sales team and come back to them with timelines for solution/product delivery
- Coordinate with relevant stakeholders within the client team to configure the WyngCommerce platform to their business, and set up processes for regular data sharing
- Drive relevant anomaly detection analyses and work on data pre-processing to prepare the data for the WyngCommerce Analytics engine
- Drive rigorous testing of results from the WyngCommerce engine and apply manual overrides, wherever required, before pushing the results to the client
- Evaluate business outcomes of different engagements with the client and prepare the analysis of the business benefits to the clients
KEY REQUIREMENTS:
- 0-2 years of experience in an analytics role (preferably client-facing, requiring you to interface with multiple stakeholders)
- Hands-on experience in data analysis (descriptive), visualization, and data pre-processing
- Hands-on experience in Python, especially data processing and visualization libraries such as pandas, numpy, matplotlib, and seaborn
- Good understanding of statistical and predictive modeling concepts (you need not be completely hands-on)
- Excellent analytical thinking and problem-solving skills
- Experience in project management and handling client communications
- Excellent communication (written/verbal) skills, including logically structuring and delivering presentations
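The "anomaly detection analyses" responsibility above can take many forms; one simple, robust approach is flagging points far from the median in units of the median absolute deviation (MAD), which, unlike a plain standard-deviation cutoff, is not inflated by the outliers themselves. A pandas sketch with invented store-sales data (column names and the 5x-MAD threshold are illustrative choices, not the WyngCommerce method):

```python
import pandas as pd

# Hypothetical client sales data, invented for illustration.
df = pd.DataFrame({"store": ["A", "A", "A", "A", "B"],
                   "daily_sales": [100.0, 110.0, 95.0, 105.0, 5000.0]})

# Median absolute deviation: a robust spread estimate.
med = df["daily_sales"].median()
mad = (df["daily_sales"] - med).abs().median()

# Flag points more than 5 MADs from the median.
df["anomaly"] = (df["daily_sales"] - med).abs() > 5 * mad
print(df[df["anomaly"]])
```

Flagged rows would then be reviewed (and manually overridden where needed) before data is pushed on to the analytics engine.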
Data Engineer: Pluto7 is a services and solutions company focused on building ML, AI, and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner, serving the Retail, Manufacturing, Healthcare, and Hi-Tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed, and processed, to enable data-driven, insightful decisions.
Must-have skills:
- Hands-on experience with database systems (structured and unstructured)
- Programming in Python, R, SAS
- Overall knowledge of, and exposure to, architecting solutions on cloud platforms like GCP, AWS, and Microsoft Azure
- Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code
- Hands-on experience in data model design and developing BigQuery/SQL (any variant) stored procedures
- Optimize data structures for efficient querying of those systems
- Collaborate with internal and external data sources to ensure integrations are accurate, scalable, and maintainable
- Collaborate with business intelligence/analytics teams on data mart optimizations, query tuning, and database design
- Execute proofs of concept to assess strategic opportunities and future data extraction and integration capabilities
- At least 2 years of experience building applications, solutions, and products based on analytics
- Data extraction, data cleansing, and transformation
- Strong knowledge of REST APIs, HTTP servers, and MVC architecture
- Knowledge of continuous integration/continuous deployment
Preferred but not required:
- Machine learning and deep learning experience
- Certification on any cloud platform
- Experience with data migration from on-prem to cloud environments
- Exceptional analytical, quantitative, problem-solving, and critical thinking skills
- Excellent verbal and written communication skills
Work Location: Bangalore
Description
Must have direct, hands-on experience (4 years) building complex Data Science solutions. Must have fundamental knowledge of inferential statistics. Should have worked on predictive modelling using Python/R. Experience should include the following:
- File I/O, data harmonization, data exploration
- Machine learning techniques (supervised, unsupervised)
- Multi-dimensional array processing
- Deep learning
- NLP, image processing
Prior experience in the Healthcare domain is a plus. Experience using Big Data is a plus. Should have excellent analytical and problem-solving ability. Should be able to grasp new concepts quickly. Should be well familiar with Agile project management methodology. Should have excellent written and verbal communication skills. Should be a team player with an open mind.
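"Multi-dimensional array processing" in Python usually means numpy: aggregating along axes and relying on broadcasting instead of explicit loops. A minimal sketch; the (patients, visits, measurements) interpretation and the array shape are invented for illustration:

```python
import numpy as np

# Hypothetical 3-D array: (patients, visits, measurements).
data = np.arange(24, dtype=float).reshape(2, 3, 4)

# Aggregate along an axis: mean measurement per patient per visit.
per_visit = data.mean(axis=2)          # shape (2, 3)

# Broadcasting: center each measurement column across all patients/visits.
col_mean = data.mean(axis=(0, 1))      # shape (4,)
centered = data - col_mean             # broadcasts over the last axis

print(per_visit.shape, centered.mean(axis=(0, 1)))
```

Axis-wise reductions and broadcast arithmetic like this are the building blocks of most pre-processing for the ML and deep learning work listed above.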