In 2018-19, the mobile games market in India generated over $600 million in revenue. With close to 450 people in its Mumbai and Bangalore offices, Games24x7 is India's largest mobile games business today and is well positioned to become the 800-pound gorilla of what will be a $2 billion market by 2022. While Games24x7 continues to invest aggressively in its India-centric mobile games, it is also diversifying its business by investing in international gaming and other tech opportunities.
Summary of Role
Position/Role Description:
The candidate will be part of a team managing databases (MySQL, MongoDB, Cassandra) and will be involved in designing, configuring and maintaining databases.
Job Responsibilities:
• Complete involvement in database requirements, starting from the design phase of every project.
• Deploying required database assets (DDL, DML) to production.
• Good understanding of MySQL replication (master-slave, master-master, GTID-based).
• Understanding of MySQL partitioning.
• Good understanding of MySQL logs and configuration.
• Scheduling backups and restorations (a minimal sketch follows this list).
• Good understanding of MySQL versions and their features.
• Good understanding of the InnoDB engine.
• Exploring ways to optimize the current environment while laying a solid foundation for new projects.
• Ability to understand and resolve any database-related production outages.
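As an illustration of the backup scheduling mentioned above, here is a minimal Python sketch, not a prescribed tool: it shells out to mysqldump and gzips the result. The database name, backup directory, and cron cadence are hypothetical placeholders, and credentials are assumed to come from a client config file.

    #!/usr/bin/env python3
    """Minimal sketch of a scheduled MySQL logical backup.

    Assumes mysqldump is on PATH and credentials are supplied via a
    client config (e.g. ~/.my.cnf); all names here are placeholders.
    """
    import datetime
    import gzip
    import shutil
    import subprocess

    def backup_database(db_name, backup_dir="/var/backups/mysql"):
        stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        out_path = f"{backup_dir}/{db_name}_{stamp}.sql.gz"
        # --single-transaction takes a consistent snapshot of InnoDB tables
        # without locking; --routines/--triggers include stored objects.
        dump = subprocess.Popen(
            ["mysqldump", "--single-transaction", "--routines", "--triggers", db_name],
            stdout=subprocess.PIPE,
        )
        with gzip.open(out_path, "wb") as gz:
            shutil.copyfileobj(dump.stdout, gz)
        if dump.wait() != 0:
            raise RuntimeError(f"mysqldump failed for {db_name}")
        return out_path

    if __name__ == "__main__":
        # Schedule via cron, e.g. "0 2 * * *" for a nightly 2 a.m. run.
        print(backup_database("appdb"))

Restoration is the mirror image: decompress the dump and pipe it into the mysql client.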
Job Requirements:
• BE/B.Tech from a reputed institute.
• Experience in Python scripting.
• Experience in shell scripting.
• General understanding of system hardware.
• Experience in MySQL is a must.
• Experience in MongoDB, Cassandra, or graph databases is preferred.
• Experience with Percona MySQL tools.
• 6-8 years of experience.
Job Location: Bengaluru
Similar jobs
Data Analyst (Python, ETL & SQL)
at Top Management Consulting Company
Python, SQL, ETL (building ETL pipelines)
EXPERIENCE: 3-7 years of experience in Data Engineering or Data Warehousing
LOCATION: Bangalore and Gurugram
- Bachelor's degree in Engineering or Computer Science; Master's degree is a plus
- 3+ years of professional work experience with a reputed analytics firm
- Expertise in handling large amounts of data using Python
- Conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools (a minimal sketch follows this list)
- Experience deploying ETL / data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
- Comfort with data modelling principles (e.g. database structure, entity relationships, UIDs, etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
- A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
- Track record of strong problem-solving, requirements gathering, and leading by example
- Ability to thrive in a flexible and collaborative environment
- Track record of completing projects successfully on time, within budget, and as per scope
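To make the data quality checks above concrete, here is a minimal pandas sketch; the column names and the dedup/fill rules are illustrative assumptions, not a prescribed standard.

    """Minimal data quality assessment and transform sketch using pandas."""
    import pandas as pd

    def assess(df):
        """Return a small data quality report for a DataFrame."""
        return {
            "row_count": len(df),
            "duplicate_rows": int(df.duplicated().sum()),
            "null_fraction": df.isna().mean().round(3).to_dict(),
        }

    df = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "amount": [100.0, None, None, 75.5],
    })
    print(assess(df))  # flags 1 duplicate row and nulls in "amount"

    # A simple transform step: deduplicate on the key and fill missing amounts.
    clean = df.drop_duplicates(subset="order_id").fillna({"amount": 0.0})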
Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools.
Experience with data management and data warehouse development, including:
- Star schemas, Data Vaults, RDBMS, and ODS
- Change data capture
- Slowly changing dimensions (a Type 2 sketch follows this list)
- Data governance
- Data quality
- Partitioning and tuning
- Data stewardship
- Survivorship
- Fuzzy matching
- Concurrency
- Vertical and horizontal scaling
- ELT, ETL
- Spark, Hadoop, MPP, RDBMS
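Since slowly changing dimensions appear in the list above, here is an illustrative Type 2 update sketched in pandas; the dimension layout (valid_from/valid_to/is_current) is a common convention, not a schema this role prescribes.

    """Type 2 slowly changing dimension update, sketched in pandas."""
    import pandas as pd

    dim = pd.DataFrame({
        "customer_id": [1, 2],
        "city": ["Pune", "Delhi"],
        "valid_from": ["2021-01-01", "2021-01-01"],
        "valid_to": [None, None],
        "is_current": [True, True],
    })
    incoming = pd.DataFrame({"customer_id": [2], "city": ["Mumbai"]})
    today = "2022-06-01"

    # Find keys whose tracked attribute changed in the incoming batch.
    merged = dim.merge(incoming, on="customer_id", suffixes=("", "_new"))
    changed = merged.loc[merged["city"] != merged["city_new"], "customer_id"]

    # Close out the current version of each changed row...
    mask = dim["customer_id"].isin(changed) & dim["is_current"]
    dim.loc[mask, ["valid_to", "is_current"]] = [today, False]

    # ...and append the new version as the current row.
    new_rows = incoming[incoming["customer_id"].isin(changed)].assign(
        valid_from=today, valid_to=None, is_current=True)
    dim = pd.concat([dim, new_rows], ignore_index=True)
    print(dim)  # customer 2 now has a closed Delhi row and a current Mumbai row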
- Experience with DevOps architecture, implementation, and operation
- Hands-on working knowledge of Unix/Linux
- Building complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues (a window-function example follows this list)
- Complex ETL program design and coding
- Experience in shell scripting and batch scripting
- Good communication (oral and written) and interpersonal skills
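As a flavor of the complex SQL called for above, here is a self-contained window-function example; it uses Python's built-in sqlite3 purely so it runs anywhere (a SQLite build with window-function support, 3.25+, is assumed), and the table is illustrative.

    """Window functions: rank orders per customer and compute per-customer totals."""
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (customer_id INT, order_date TEXT, amount REAL);
        INSERT INTO orders VALUES
            (1, '2022-01-05', 120.0), (1, '2022-02-10', 80.0),
            (2, '2022-01-20', 200.0), (2, '2022-03-01', 50.0);
    """)
    query = """
        SELECT customer_id, order_date, amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY order_date DESC) AS recency_rank,
               SUM(amount) OVER (PARTITION BY customer_id) AS customer_total
        FROM orders
    """
    for row in conn.execute(query):
        print(row)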
- Work closely with business teams to understand their needs; participate in requirements gathering, create artifacts, and seek business approval.
- Help the business define new requirements; participate in end-user meetings to derive and define business requirements, propose cost-effective solutions for data analytics, and familiarize the team with customer needs, specifications, design targets, and techniques to support task performance and delivery.
- Propose sound designs and solutions, adhering to best design and standards practices.
- Review and propose industry-best tools and technologies for ever-changing business rules and data sets; conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks.
- Prepare the plan; design and document the architecture, high-level topology design, and functional design; review these with customer IT managers and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards, and techniques.
- Review code developed by other programmers; mentor, guide, and monitor their work, ensuring adherence to programming and documentation policies.
- Work with functional business analysts to ensure that application programs function as defined.
- Capture user feedback on the delivered systems and document it for the client's and project manager's review; review all deliverables for quality before final delivery to the client.
Technologies (Select based on requirement)
Databases - Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift
Tools – Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory
Utilities for bulk loading and extracting
Languages – SQL, PL-SQL, T-SQL, Python, Java, or Scala
JDBC/ODBC, JSON
Data virtualization and data services development
Service Delivery - REST, Web Services
Data Virtualization Delivery – Denodo
ELT, ETL
Cloud certification – Azure
Complex SQL Queries
Data Ingestion, Data Modeling (Domain), Consumption (RDBMS)
- Banking Domain
- Assist the team in building Machine learning/AI/Analytics models on open-source stack using Python and the Azure cloud stack.
- Be part of the internal data science team at Fragma Data, which provides data science consultation to large organizations such as banks, e-commerce companies, and social media companies on their scalable AI/ML needs on the cloud, and help build POCs and develop production-ready solutions.
- Candidates will be provided with opportunities for on-the-job training and professional certifications in these areas: Azure Machine Learning services, Microsoft Customer Insights, Spark, chatbots, Databricks, NoSQL databases, etc.
- Assist the team in conducting AI demos, talks, and workshops occasionally to large audiences of senior stakeholders in the industry.
- Work on large enterprise scale projects end-to-end, involving domain specific projects across banking, finance, ecommerce, social media etc.
- Keen interest in learning new technologies and the latest developments, and applying them to assigned projects.
Desired Skills
- Professional hands-on coding experience in Python: over 1 year for Data Scientist and over 3 years for Senior Data Scientist.
- This is primarily a programming/development-oriented role, so strong programming skills in writing object-oriented and modular code in Python, and experience pushing projects to production, are important.
- Strong foundational knowledge and professional experience in:
- Machine Learning (compulsory)
- Deep Learning (compulsory)
- Strong knowledge of at least one of: Natural Language Processing, Computer Vision, Speech Processing, or Business Analytics
- Understanding of database technologies and SQL (compulsory)
- Knowledge of the following frameworks:
- Scikit-learn (compulsory; a minimal sketch follows this list)
- Keras/TensorFlow/PyTorch (at least one is compulsory)
- API development in Python for ML models (good to have)
- Excellent communication skills are necessary to succeed in this role, which has high external visibility and multiple opportunities to present data science results to large external audiences, including VPs, Directors, and CXOs; communication skills will therefore be a key consideration in the selection process.
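A minimal sketch of the compulsory scikit-learn skills above: train, evaluate, and persist a classifier. The data is synthetic and the file name is a placeholder.

    """Train/evaluate/persist a model with scikit-learn (synthetic data)."""
    import joblib
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))

    # Persist the fitted model so an API layer (Flask/FastAPI, say) can load
    # and serve it -- the "API development in Python for ML models" item above.
    joblib.dump(model, "model.joblib")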
Fresher Data Engineer - Python + SQL (Internship + Job Opportunity)
Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies (a minimal ETL sketch follows this list).
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
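To ground the extract-transform-load responsibilities above, here is a minimal sketch; the S3 bucket, key, column names, and the SQLite "warehouse" stand-in are all illustrative assumptions.

    """Minimal ETL sketch: S3 -> pandas cleanup -> SQL table."""
    import boto3
    import pandas as pd
    from sqlalchemy import create_engine

    def run_etl(bucket, key, table):
        # Extract: pull the raw file from S3 (assumes AWS credentials are configured).
        boto3.client("s3").download_file(bucket, key, "/tmp/raw.csv")
        # Transform: basic cleaning; real pipelines add validation and logging.
        df = pd.read_csv("/tmp/raw.csv")
        df = df.dropna(subset=["event_id"]).drop_duplicates(subset=["event_id"])
        # Load: write to the warehouse (SQLite here as a local stand-in).
        engine = create_engine("sqlite:////tmp/warehouse.db")
        df.to_sql(table, engine, if_exists="append", index=False)
        return len(df)

    # run_etl("my-data-bucket", "events/2024-01-01.csv", "events")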
Qualifications for Data Engineer
- Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow DAG sketch follows this list)
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
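For the workflow tools named above, here is a minimal Apache Airflow DAG sketch (Airflow 2.x API assumed); the task bodies are placeholders.

    """A two-task daily DAG: extract, then load."""
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from source")

    def load():
        print("write data to warehouse")

    with DAG(
        dag_id="daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_load  # run extract before load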
Python Developer
at Reval Analytical Services Pvt Ltd
Position Name: Software Developer
Required Experience: 3+ Years
Number of positions: 4
Qualifications: Master's or Bachelor's degree in Engineering, Computer Science, or equivalent (BE/BTech or MS in Computer Science).
Key Skills: Python, Django, Nginx, Linux, Sanic, Pandas, NumPy, Snowflake, SciPy, Data Visualization, Redshift, Big Data, Charting
Compensation - As per industry standards.
Joining - Immediate joining is preferable.
Required Skills:
- Strong experience in Python and web frameworks like Django, Tornado, and/or Flask
- Experience in data analytics using standard Python libraries such as Pandas, NumPy, and Matplotlib
- Conversant in implementing charts using charting libraries like Highcharts, d3.js, c3.js, and dc.js, and data visualization tools like Plotly and ggplot
- Experience handling large databases and data warehouse technologies like MongoDB, MySQL, Big Data, Snowflake, and Redshift
- Experience building APIs and multi-threaded tasks on the Linux platform (a minimal sketch follows this list)
- Exposure to finance and capital markets will be added advantage.
- Strong understanding of software design principles, algorithms, data structures, design patterns, and multithreading concepts.
- Experience building highly available distributed systems on cloud infrastructure, or exposure to the architectural patterns of large, high-scale web applications.
- Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
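A minimal sketch of the API-building skill above, combining Flask with a pandas aggregation; the data, route, and tickers are illustrative. In production this would sit behind gunicorn/Nginx rather than the dev server.

    """One-endpoint Flask API serving a pandas aggregation."""
    import pandas as pd
    from flask import Flask, jsonify

    app = Flask(__name__)
    prices = pd.DataFrame({
        "ticker": ["AAPL", "AAPL", "MSFT"],
        "close": [190.1, 191.3, 410.2],
    })

    @app.route("/avg-close/<ticker>")
    def avg_close(ticker):
        sub = prices[prices["ticker"] == ticker.upper()]
        if sub.empty:
            return jsonify(error="unknown ticker"), 404
        return jsonify(ticker=ticker.upper(), avg_close=float(sub["close"].mean()))

    if __name__ == "__main__":
        app.run(debug=True)  # dev server only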
Company Description:
Reval Analytical Services is a fully-owned subsidiary of Virtua Research Inc., US. It is a financial services technology company focused on consensus analytics, peer analytics, and web-enabled information delivery. The company's unique combination of investment research experience, modeling expertise, and software development capabilities enables it to provide industry-leading financial research tools and services for investors, analysts, and corporate management.
Website: http://www.virtuaresearch.com
Machine Learning Engineer
at Deemsoft