We develop and publish mobile games that millions love. With global hits and 450 million+ downloads, we're the UK's biggest hypercasual games company!
It is the leader in capturing technographics-powered buying intent, helping companies uncover the 3% of active buyers in their target market. It evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence. Its customers have access to the buying patterns and contact information of more than 17 million companies and 70 million decision makers across the world.

Role – Data Engineer

Responsibilities:
• Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for the Data Lake/Data Warehouse.
• Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
• Assemble large, complex data sets from third-party vendors to meet business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
• Streamline existing reporting and analysis solutions, and introduce enhanced ones, that leverage complex data sources derived from multiple internal systems.

Requirements:
• 5+ years of experience in a Data Engineer role.
• Proficiency in Linux.
• Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
• Must have experience with Python/Scala.
• Must have experience with Big Data technologies like Apache Spark.
• Must have experience with Apache Airflow.
• Experience with data pipeline and ETL tools like AWS Glue.
• Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
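The extraction-transformation-loading work described above can be sketched in miniature. The snippet below is an illustrative toy, not this company's actual pipeline: the table names (`raw_vendor_feed`, `warehouse_companies`) and columns are hypothetical, and SQLite stands in for the real warehouse.

```python
import sqlite3

def run_etl(conn):
    """Toy ETL pass: extract raw vendor rows, clean them, load them
    into a warehouse-style table. Table/column names are hypothetical."""
    cur = conn.cursor()
    # Extract: read raw third-party vendor records.
    rows = cur.execute("SELECT company, revenue FROM raw_vendor_feed").fetchall()
    # Transform: drop rows with missing revenue, normalise company names.
    cleaned = [(name.strip().lower(), rev) for name, rev in rows if rev is not None]
    # Load: write the cleaned rows into the warehouse table.
    cur.executemany("INSERT INTO warehouse_companies VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)
```

In a production pipeline of the kind the posting describes, each of these three steps would typically be a separate task in an orchestrator such as Airflow, with the transform logic unit-tested in isolation.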
Greetings! We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune/Nagpur, Goa, Hyderabad/Bangalore

Job Requirements:
• 9+ years of total experience, preferably in the big data space.
• Creating Spark applications using Scala to process data.
• Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
• Experience in Spark job performance tuning and optimization.
• Experience processing data using Kafka/Python.
• Experience and understanding in configuring Kafka topics to optimize performance.
• Proficiency in writing SQL queries to process data in a Data Warehouse.
• Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
• Experience with AWS services like EMR.
Responsibilities:
● Exploratory analysis: fetch data from systems and analyze trends.
● Developing customer segmentation models to improve the efficiency of marketing and product campaigns.
● Establishing mechanisms for cross-functional teams to consume customer insights to improve engagement along the customer life cycle.
● Gathering requirements for dashboards from business, marketing, and operations stakeholders.
● Preparing internal reports for executive leadership and supporting their decision making.
● Analysing data, deriving insights, and embedding them into business actions.
● Working with cross-functional teams.

Skills Required:
• Data analytics visionary.
• Strong in SQL and Excel; experience in Tableau is good to have.
• Experience in the field of data analysis and data visualization.
• Strong at analysing data and creating dashboards.
• Strong communication, presentation, and business intelligence skills.
• Multi-dimensional "growth hacker" skill set with a strong sense of ownership.
• Aggressive "take no prisoners" approach.
Primary Responsibilities:
• Developing and maintaining applications with PySpark.
• Contributing to the overall design and architecture of the applications developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interacting with business users to understand requirements and troubleshoot issues.
• Implementing projects based on functional specifications.

Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL.
• Good customer communication.
• Good analytical skills.
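Partition tuning of the kind this posting mentions often starts from a simple rule of thumb: size partitions to roughly 128 MB each. The helper below is a sketch of that heuristic only; the 128 MB target is an assumed convention, not a PySpark API, and real tuning also weighs executor cores and skew.

```python
import math

def suggest_partitions(total_bytes, target_partition_bytes=128 * 1024 * 1024,
                       min_partitions=1):
    """Rule-of-thumb partition count: total data size divided by a
    target partition size (the 128 MB default here is an assumption)."""
    return max(min_partitions, math.ceil(total_bytes / target_partition_bytes))
```

In a PySpark job this estimate would feed a call like `df.repartition(suggest_partitions(input_size))` before a wide shuffle, then be refined against the Spark UI's actual task sizes.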
Job Description:
• Help build a Data Science team engaged in researching, designing, implementing, and deploying full-stack, scalable data analytics and machine learning solutions to address various business issues.
• Model complex algorithms, discover insights, and identify business opportunities through the use of algorithmic, statistical, visualization, and mining techniques.
• Translate business requirements into quick prototypes and enable the development of big data capabilities that drive business outcomes.
• Take responsibility for data governance and define data collection and collation guidelines.
• Advise, guide, and train junior data engineers in their jobs.

Must Have:
• 4+ years of experience in a leadership role as a Data Scientist.
• Preferably from the retail, manufacturing, or healthcare industry (not mandatory).
• Willing to start from scratch and build up a team of Data Scientists.
• Open to taking up challenges with end-to-end ownership.
• Confident, with excellent communication and decision-making skills.
Work days: Sunday through Thursday
Work shift: Day time

• Strong problem-solving skills with an emphasis on product development.
• Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
• Experience building ML pipelines with Apache Spark and Python.
• Proficiency in implementing the end-to-end Data Science life cycle.
• Experience in model fine-tuning and advanced grid search techniques.
• Experience working with and creating data architectures.
• Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
• Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
• Excellent written and verbal communication skills for coordinating across teams.
• A drive to learn and master new technologies and techniques.
• Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
• Develop custom data models and algorithms to apply to data sets.
• Use predictive modeling to increase and optimize customer experience, revenue generation, ad targeting, and other business outcomes.
• Develop the company A/B testing framework and test model quality.
• Coordinate with different functional teams to implement models and monitor outcomes.
• Develop processes and tools to monitor and analyze model performance and data accuracy.

Key skills:
● Strong knowledge of Data Science pipelines with Python
● Object-oriented programming
● A/B testing frameworks and model fine-tuning
● Proficiency with the scikit-learn, NumPy, and pandas packages in Python

Nice to have:
● Ability to work with containerized solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing and test-driven development practice
● DevOps, continuous integration / continuous deployment experience
● Agile development environment experience, familiarity with SCRUM
● Deep learning knowledge
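An A/B testing framework like the one this posting asks for usually reduces, at its core, to comparing conversion rates between two variants. Below is a minimal two-proportion z-test sketch using only the standard library; the sample numbers in the usage note are invented for illustration.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

For example, 100 conversions out of 1,000 in variant A versus 130 out of 1,000 in variant B yields a z of about 2.1, past the conventional 1.96 threshold for a two-sided test at the 5% level. A real framework would add power analysis, sequential-testing corrections, and guardrail metrics on top of this.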
Job Title: Data Engineer (Remote)

Job Description

You will work on: We help many of our clients make sense of their large investments in data, whether building analytics solutions or machine learning applications. You will work on cutting-edge cloud-native technologies to crunch terabytes of data into meaningful insights.

What you will do (Responsibilities):
Collaborate with Business, Marketing & CRM teams to build highly efficient data pipelines. You will be responsible for:
- Dealing with customer data and building highly efficient pipelines
- Building insights dashboards
- Troubleshooting data loss, data inconsistency, and other data-related issues
- Maintaining backend services (written in Golang) for metadata generation
- Providing prompt support and solutions for Product, CRM, and Marketing partners

What you bring (Skills):
- 2+ years of experience in data engineering
- Coding experience with one of the following languages: Golang, Java, Python, C++
- Fluency in SQL
- Working experience with at least one of the following data-processing engines: Flink, Spark, Hadoop, Hive

Great if you know (Skills):
- T-shaped skills are always preferred, so if you have the passion to work across the full-stack spectrum, it is more than welcome.
- Exposure to infrastructure skills like Docker, Istio, and Kubernetes is a plus.
- Experience building and maintaining large-scale and/or real-time complex data-processing pipelines using Flink, Hadoop, Hive, Storm, etc.

Advantage Cognologix:
- Higher degree of autonomy, startup culture & small teams
- Opportunities to become an expert in emerging technologies
- Remote working options for the right maturity level
- Competitive salary & family benefits
- Performance-based career advancement

About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals.
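Troubleshooting data loss of the kind this posting describes typically begins with a reconciliation check between a source and a sink. The sketch below is a toy version of that first step; the record IDs are hypothetical, and a real check would run over keys pulled from both systems.

```python
def reconcile(source_ids, sink_ids):
    """Compare record IDs between a pipeline's source and its sink,
    reporting what was lost in transit and what appeared unexpectedly."""
    source, sink = set(source_ids), set(sink_ids)
    return {
        "missing_in_sink": sorted(source - sink),      # candidates for data loss
        "unexpected_in_sink": sorted(sink - source),   # candidates for duplication/corruption
    }
```

In practice this kind of check is scheduled after each pipeline run, with non-empty `missing_in_sink` results triggering an alert before downstream dashboards go stale.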
We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern, and cloud-native way.

Skills: Java, Python, Hadoop, Hive, Spark programming, Kafka

Thanks & regards,
Cognologix HR Dept.
- Proficient in R and Python
- 1+ years of work experience, with at least 6 months working with Python
- Prior experience building ML models
- Prior experience with SQL
- Knowledge of statistical techniques
- Experience working with spatial data is an added advantage
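The spatial-data experience mentioned above often comes down to distance computations such as the great-circle (haversine) distance between coordinate pairs. A minimal standard-library sketch, assuming a spherical Earth of radius 6,371 km:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points given
    in degrees, via the haversine formula on a spherical Earth."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))
```

Libraries such as GeoPandas wrap this kind of computation (plus projections and spatial joins), but the formula itself is a common interview-level building block for spatial feature engineering.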
Role and Responsibilities:
- Build a low-latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality.
- Build robust RESTful APIs that serve data and insights to DataWeave and other products.
- Design user interaction workflows on our products and integrate them with data APIs.
- Help stabilize and scale our existing systems; help design the next-generation systems.
- Scale our back-end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy.
- Lead/be a part of all major tech decisions; bring in best practices.
- Mentor younger team members and interns.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.

Skills and Requirements:
- 8-15 years of experience building and scaling APIs and web applications.
- Experience building and managing large-scale data/analytics systems.
- A strong grasp of CS fundamentals and excellent problem-solving abilities.
- A good understanding of software design principles and architectural best practices.
- A passion for writing code and experience coding in multiple languages, including at least one scripting language, preferably Python.
- The ability to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- A self-starter: someone who thrives in fast-paced environments with minimal 'management'.
- Experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, and Elasticsearch.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Command-line proficiency, along with proficiency in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog, etc.
- Working knowledge of building websites and apps, with a good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have personal projects (including open-source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.
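A low-latency serving layer like the one this posting describes commonly pairs the backing query with an in-process cache so repeated API hits for the same report skip the database. The sketch below uses `functools.lru_cache` as the simplest possible stand-in; `get_report` and its return shape are hypothetical, not DataWeave's actual API, and a shared cache like Redis would replace this in a multi-instance deployment.

```python
from functools import lru_cache

CALLS = {"db": 0}  # instrumentation: counts how often the "database" is hit

@lru_cache(maxsize=1024)
def get_report(report_id):
    """Stand-in for an expensive dashboard/report query. Because of the
    cache, repeated requests for the same report_id hit the backend once."""
    CALLS["db"] += 1
    return (report_id, ())  # hashable placeholder for the report payload
```

The trade-off to note: an LRU cache bounds memory by evicting the least-recently-used entries, but it has no TTL, so serving layers that must reflect fresh data usually layer expiry on top.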
We are a start-up in India seeking excellence in everything we do, with unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions, keeping the user at the center of everything.

As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and solve complex technical customer challenges.

Expectations and Tasks:
- 7+ years of total experience, with a minimum of 2 years in Hadoop technologies like HDFS, Hive, and MapReduce
- Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
- Experience with core Data Science techniques such as regression, classification, or clustering, and experience with deep learning frameworks
- Experience in NLP, R, and Python
- Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
- Ability to communicate clearly and concisely across technology and business teams
- Excellent problem-solving and technical troubleshooting skills
- Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical Skills: Core Java, Multithreading, Collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at Jia@TalentSculpt.com; I will respond within 24 hours. Online assessments and pre-employment screening are part of the selection process.
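The recommendation-engine experience asked for above can be illustrated with one of the simplest approaches: cosine similarity over rating vectors. The sketch below uses invented toy vectors; production recommenders at the scale this posting implies would use distributed factorization or approximate nearest-neighbour search instead.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def most_similar(target, items):
    """Return the id of the item whose rating vector is closest to `target`."""
    return max(items, key=lambda k: cosine(target, items[k]))
```

The same primitive underlies item-to-item collaborative filtering: score every candidate against a user's profile vector and recommend the top matches.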