Role Objective:
Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
External Skills And Expertise
Must have Skills:
Good to Have Skills:
Position: ML/AI Engineer (3+ years)
Responsibilities:
Design, develop, and deploy machine learning models with a focus on scalability and performance.
Implement unsupervised algorithms and handle large datasets for inference and analysis.
Collaborate with cross-functional teams to integrate AI/ML solutions into real-world applications.
Work on MLOps frameworks for model lifecycle management (optional but preferred).
Stay updated with the latest advancements in AI/ML technologies.
Requirements:
Proven experience in machine learning model building and deployment.
Hands-on experience with unsupervised learning algorithms (e.g., clustering, anomaly detection); a brief sketch follows this list.
Proficiency in handling Big Data and related tools/frameworks.
Exposure to MLOps tools like MLflow, Kubeflow, or similar (preferred).
Familiarity with cloud platforms such as AWS, Azure, or GCP.
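To give a flavour of the unsupervised-learning expectation above, here is a minimal, illustrative sketch using scikit-learn on synthetic data; the dataset, cluster count, and contamination rate are assumptions, not part of the role description.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X = rng.normal(0, 1, size=(10_000, 8))  # stand-in for real feature vectors

# Cluster to summarise structure in the (unlabeled) data.
labels = KMeans(n_clusters=5, n_init=10, random_state=42).fit_predict(X)

# Flag outliers: IsolationForest returns -1 for anomalies, 1 for inliers.
flags = IsolationForest(contamination=0.01, random_state=42).fit_predict(X)
print(f"{(flags == -1).sum()} anomalies found across {labels.max() + 1} clusters")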
Location: Baner, Pune
Employment Type: Full-time
Job Description: PEL Data Analyst
Job Title: PEL Data Analyst / PEL Admin Officer
About the Group
Our Client is one of the most reputed service groups in Oman’s construction and mining industry. The organization has grown from a small family business to one that leads the industry in construction contracting, manufacturing of cement products, building finishes products, roads, asphalt & infrastructure works, and mining, amongst other product offerings. The Group has achieved this by basing everything it does on its core values of HSE & Quality. With a diverse team of over 22,000 employees, The Group endeavours to serve the Sultanate with international-quality products & services to meet the demands of the growing nation.
Purpose of the job
Responsible for all day-to-day activities supporting the improvement of organizational effectiveness by working with extended teams & functions to develop & implement high-quality reports, dashboards, and performance indicators for the PEL Division.
Key Responsibilities & Activities:
1. Responsible for data modeling and analysis.
2. Responsible for deep data dives and extremely detailed analysis of various data points collected during the need identification phase. Organize and present data to line management for appropriate guidance and subsequent process mapping.
3. Responsible for rigorous periodic reviews of various individual, functional, business unit, and group metrics and indicators of success, including but not limited to key productivity indicators, result areas, health meter, performance goals, etc. Report results to line management and support the decision-making process.
4. Responsible for the development of high-quality analytical reports, and training packs.
5. Responsible for all individual and specific departmental productivity targets, financial objectives, KPIs and attainment thereof.
Other tasks:
1. Promoting The Group Values, Code of Conduct and associated policies
2. Participating and providing positive contributions to the technical, commercial, planning, safety and quality performance of the organization, in accordance with Client, contractual, and regulatory requirements
3. Visiting Sites and Project locations to discuss operational aspects and carry out training.
4. Undertaking any other responsibilities as directed and mutually agreed with the Line Management.
The above list is not exhaustive. Individuals may be required to perform additional job-related tasks and duties as assigned.
Educational Qualifications
• Bachelor’s degree in engineering
• Masters (preferred)
Professional Certifications
• Certifications related to Data Analyst or Data Scientist (preferred)
Skills
• Advanced Excel and macros, MS Power Query, MS Power BI, Google Data Studio (Looker Studio)
• Knowledge of Python, M Code and DAX is a plus
• Data management, Big Data, data analysis
• Attention to detail and strong analytical skills
• Strong data management skills
Experience
• 4 years of position-specific experience
• 7 years of overall professional experience
Language
• English (fluent)
• Arabic (preferred)
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
Key Skills & Requirements:
Preferred Skills:
Skills:
Experience with Cassandra, including installing, configuring, and monitoring a Cassandra cluster.
Experience with Cassandra data modeling and CQL scripting (a brief sketch follows this list). Experience with DataStax Enterprise Graph.
Experience with both Windows and Linux operating systems. Knowledge of the Microsoft .NET Framework (C#, .NET Core).
Ability to perform effectively in a team-oriented environment.
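As a rough illustration of the Cassandra data modeling and CQL scripting noted above, the following sketch uses the DataStax Python driver (cassandra-driver); the keyspace, table, and contact point are hypothetical.

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Query-first data modeling: the partition key (sensor_id) groups rows for the
# dominant read pattern; the clustering column (ts) orders them within it.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.readings (
        sensor_id text, ts timestamp, value double,
        PRIMARY KEY ((sensor_id), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
""")
session.execute(
    "INSERT INTO demo.readings (sensor_id, ts, value) VALUES (%s, toTimestamp(now()), %s)",
    ("sensor-1", 21.5),
)
row = session.execute(
    "SELECT value FROM demo.readings WHERE sensor_id = %s LIMIT 1", ("sensor-1",)
).one()
print(row.value if row else "no data")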
Experience: 12-15 years, with 7 years in Big Data, Cloud, and Analytics.
Key Responsibilities:
Requirements:
Preferred Qualifications:
Experience: 12-15 Years
Key Responsibilities:
Skills Required:
Role Objective:
Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
As a Senior Analyst you will play a crucial role in improving customer experience, retention, and growth by identifying opportunities in the moments that matter and surfacing them to teams across functions. You have experience with collecting, stitching, and analyzing Voice of Customer and CX data, and are passionate about customer feedback. You will partner with the other Analysts on the team, as well as the Directors, in delivering impactful presentations that drive action across the organization.
You come with an understanding of customer feedback and experience analytics and the ability to tell a story with data. You are a go-getter who can take a request and run with it independently but also does not shy away from collaborating with the other team members. You are flexible and thrive in a fast-paced environment with changing priorities.
Responsibilities
Qualifications
Non-technical requirements:
The Sr. Analytics Engineer would provide technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation (see the sketch after this list).
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., Redshift, BigQuery, Cassandra, etc.).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud Storage, etc.
- A deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus.
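A minimal sketch of the Spark tuning points above, assuming PySpark; the S3 paths, column names, and settings are illustrative starting points, not prescribed values.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("tuning-sketch")
    .config("spark.sql.shuffle.partitions", "200")   # size to cluster cores/data
    .config("spark.executor.memory", "4g")
    .getOrCreate()
)

events = spark.read.parquet("s3a://bucket/events/")    # large fact table
dims = spark.read.parquet("s3a://bucket/dim_users/")   # small dimension

# Broadcasting the small side turns a shuffle join into a map-side join.
joined = events.join(F.broadcast(dims), "user_id")

daily = (
    joined.repartition("dt")                # colocate keys before the aggregate
    .groupBy("dt", "country")
    .agg(F.count("*").alias("events"))
)
daily.write.mode("overwrite").partitionBy("dt").parquet("s3a://bucket/out/daily/")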
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing industry with over two years of experience to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
Nice To Have
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lakehouse & subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
Nice To Have:
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
Must have skills
3 to 6 years
Data Science
SQL, Excel, BigQuery - mandatory, 3+ years
Python/ML, Hadoop, Spark - 2+ years
Requirements
• 3+ years prior experience as a data analyst
• Detail-oriented, with structured thinking and an analytical mindset.
• Proven analytic skills, including data analysis and data validation.
• Technical writing experience in relevant areas, including queries, reports, and presentations.
• Strong SQL and Excel skills with the ability to learn other analytic tools
• Good communication skills (being precise and clear)
• Prior knowledge of Python and ML algorithms is good to have
Join Our Journey
Jules develops an amazing end-to-end solution for recycled materials traders, importers and exporters, which means a looooot of internal, structured data to play with in order to provide reporting, alerting and insights to end-users. With about 200 tables covering all business processes from order management to payments, including logistics, hedging and claims, the wealth the data entered in Jules can unlock is massive.
After working on a simple stack made of Postgres, SQL queries and a visualization solution, the company is now ready to set up its data stack and only misses you. We are thinking dbt, Redshift or Snowflake, Fivetran, Metabase or Luzmo, etc. We also have an AI team already playing around with text-driven data interaction.
As a Data Engineer at Jules AI, your duties will involve both data engineering and product analytics, enhancing our data ecosystem. You will collaborate with cross-functional teams to design, develop, and sustain data pipelines, and conduct detailed analyses to generate actionable insights.
Roles And Responsibilities:
Mandatory Qualifications:
Grow, Develop, and Thrive With Us
Apply to us directly : https://nyteco.keka.com/careers/jobdetails/41442
Python Data Engineer
Job Description:
• Design, develop, and maintain database scripts and procedures to support application requirements.
• Collaborate with software developers to integrate database scripts with application code.
• Troubleshoot and resolve database issues in a timely manner.
• Perform database maintenance tasks, such as backups, restores, and migrations (a minimal sketch follows this list).
• Implement data security measures to protect sensitive information.
• Develop and maintain documentation for database scripts and procedures.
• Stay up-to-date with emerging technologies and best practices in database management.
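As a hedged illustration of the maintenance tasks above, this sketch uses Python's built-in sqlite3 module as a stand-in for whichever database engine is in use; paths and names are hypothetical.

import sqlite3
from datetime import datetime, timezone

def backup_database(src_path: str, backup_dir: str) -> str:
    """Copy a live database to a timestamped backup file."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest_path = f"{backup_dir}/app-{stamp}.db"
    with sqlite3.connect(src_path) as src, sqlite3.connect(dest_path) as dst:
        src.backup(dst)   # consistent online copy, even while the DB is in use
    return dest_path

if __name__ == "__main__":
    print("backup written to", backup_database("app.db", "/var/backups"))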
Job Requirements:
• Bachelor’s degree in Computer Science, Information Technology, or related field.
• 3+ years of proven experience as a Database Engineer or in a similar role, with Python.
• Proficiency in SQL and scripting languages such as Python or JavaScript.
• Strong understanding of database management systems, including relational databases (e.g., MySQL, PostgreSQL, SQL Server) and NoSQL databases (e.g., MongoDB, Cassandra).
• Experience with database design principles and data modelling techniques.
• Knowledge of database optimisation techniques and performance tuning.
• Familiarity with version control systems (e.g., Git) and continuous integration tools.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills.
Job Description:
We are seeking a talented Machine Learning Engineer with expertise in software engineering to join our team. As a Machine Learning Engineer, your primary responsibility will be to develop machine learning (ML) solutions that focus on technology process improvements. Specifically, you will be working on projects involving ML & Generative AI solutions for technology & data management efficiencies, such as optimal cloud computing, knowledge bots, software code assistants, automatic data management, etc.
Responsibilities:
- Collaborate with cross-functional teams to identify opportunities for technology process improvements that can be solved using machine learning and generative AI.
- Define and build innovative ML and Generative AI systems, such as AI assistants for varied SDLC tasks, and improve data & infrastructure management.
- Design and develop ML engineering solutions, generative AI applications, and fine-tuning of Large Language Models (LLMs) for the above, ensuring scalability, efficiency, and maintainability of such solutions.
- Implement prompt engineering techniques to fine-tune and enhance LLMs for better performance and application-specific needs.
- Stay abreast of the latest advancements in the field of Generative AI and actively contribute to the research and development of new ML & Generative AI Solutions.
Requirements:
- A Master's or Ph.D. degree in Computer Science, Statistics, Data Science, or a related field.
- Proven experience working as a Software Engineer, with a focus on ML engineering and exposure to Generative AI applications such as ChatGPT.
- Strong proficiency in programming languages such as Java, Scala, and Python, and technologies such as Google Cloud, BigQuery, Hadoop, and Spark.
- Solid knowledge of software engineering best practices, including version control systems (e.g., Git), code reviews, and testing methodologies.
- Familiarity with large language models (LLMs), prompt engineering techniques, vector DBs, embeddings, and various fine-tuning techniques (a toy sketch of the retrieval idea follows this list).
- Strong communication skills to effectively collaborate and present findings to both technical and non-technical stakeholders.
- Proven ability to adapt and learn new technologies and frameworks quickly.
- A proactive mindset with a passion for continuous learning and research in the field of Generative AI.
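To illustrate the embeddings/vector-DB idea in the requirements, here is a toy sketch with NumPy only; the hash-based embed() is a stand-in for a real embedding model, and all names are hypothetical.

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a learned sentence embedding (stable within one process)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

docs = ["rotate cloud credentials", "tune Spark executors", "review pull requests"]
index = np.stack([embed(d) for d in docs])   # rows are unit vectors

query = embed("how do I size Spark executor memory?")
scores = index @ query                       # cosine similarity, since vectors are unit length
print(docs[int(np.argmax(scores))])          # nearest document by similarity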
If you are a skilled and innovative Data Scientist with a passion for Generative AI, and have a desire to contribute to technology process improvements, we would love to hear from you. Join our team and help shape the future of our AI Driven Technology Solutions.
Radisys Corporation, a global leader in open telecom solutions, enables service providers to drive disruption with new open architecture business models. Our innovative technology solutions leverage open reference architectures and standards, combined with open software and hardware, to power business transformation for the telecom industry. Our services organization delivers systems integration expertise necessary to solve complex deployment challenges for communications and content providers.
Job Overview :
We are looking for a Lead Engineer - Java with a strong background in Java development and hands-on experience with J2EE, Spring Boot, Kubernetes, Microservices, NoSQL, and SQL. As a Lead Engineer, you will be responsible for designing and developing high-quality software solutions and ensuring the successful delivery of projects. This is a full-time role for candidates with 7 to 10 years of experience, based in Bangalore, Karnataka, India, with excellent growth opportunities.
Qualifications and Skills :
- Bachelor's or master's degree in Computer Science or a related field
- Strong knowledge of Core Java, J2EE, and Spring Boot frameworks
- Hands-on experience with Kubernetes and microservices architecture
- Experience with NoSQL and SQL databases
- Proficient in troubleshooting and debugging complex system issues
- Experience in Enterprise Applications
- Excellent communication and leadership skills
- Ability to work in a fast-paced and collaborative environment
- Strong problem-solving and analytical skills
Roles and Responsibilities :
- Work closely with product management and cross-functional teams to define requirements and deliverables
- Design scalable and high-performance applications using Java, J2EE, and Spring Boot
- Develop and maintain microservices using Kubernetes and containerization
- Design and implement data models using NoSQL and SQL databases
- Ensure the quality and performance of software through code reviews and testing
- Collaborate with stakeholders to identify and resolve technical issues
- Stay up-to-date with the latest industry trends and technologies
Responsibilities -
Qualifications & Requirements -
Role & Responsibilities
- Apache Kafka for distributed event streaming.
- Apache Spark for large-scale data processing.
- Containers for scalable and portable deployments.
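A minimal sketch of the Kafka item above, assuming the kafka-python package and a local broker; the topic and payload are hypothetical.

import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 1, "amount": 42.0})
producer.flush()

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,   # stop iterating once the topic is drained
)
for message in consumer:
    print(message.value)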
Technical Skills:
Bachelor’s Degree in Information Technology or related field desirable.
• 5 years of Database administrator experience in Microsoft technologies
• Experience with Azure SQL in a multi-region configuration
• Azure certifications (Good to have)
• 2+ years’ experience performing data migrations, upgrades/modernizations, and performance tuning on IaaS and PaaS (Managed Instance and Azure SQL)
• Experience with routine maintenance, recovery, and handling failover of databases
• Knowledge of RDBMSs, e.g., Microsoft SQL Server, and the Azure cloud platform
• Expertise in Microsoft SQL Server on VMs, Azure SQL Managed Instance, and Azure SQL
• Experience in setting up and working with Azure data warehouse.
Job Title: Big Data Developer
Job Description
Bachelor's degree in Engineering or Computer Science or equivalent OR Master's in Computer Applications or equivalent.
Solid software development experience, including leading teams of engineers and scrum teams.
4+ years of hands-on experience working with MapReduce, Hive, and Spark (Core, SQL, and PySpark).
Solid grasp of data warehousing concepts.
Knowledge of the financial reporting ecosystem will be a plus.
4+ years of experience within Data Engineering / Data Warehousing using Big Data technologies will be an added advantage.
Expertise in the distributed ecosystem.
Hands-on experience with programming using Core Java or Python/Scala.
Expertise in Hadoop and Spark architecture and their working principles.
Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames) and optimizing joins while processing huge amounts of data (see the sketch after this list).
Experience in UNIX shell scripting.
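The join-optimization point above can be illustrated with a short Spark SQL sketch over hypothetical Hive tables; the broadcast hint and early filter are the techniques of interest, and all table and column names are assumptions.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("join-opt").enableHiveSupport().getOrCreate()

# Filter the fact table before joining, and broadcast the small dimension so
# Spark performs a map-side join instead of shuffling both sides.
result = spark.sql("""
    SELECT /*+ BROADCAST(d) */ f.dt, d.region, COUNT(*) AS txns
    FROM (SELECT dt, branch_id FROM txn_facts WHERE dt = '2024-01-01') f
    JOIN dim_branch d ON f.branch_id = d.branch_id
    GROUP BY f.dt, d.region
""")
result.show()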
Roles & Responsibilities
Ability to design and develop optimized data pipelines for batch and real-time data processing
Should have experience in analysis, design, development, testing, and implementation of system applications
Demonstrated ability to develop and document technical and functional specifications and analyze software and system processing flows.
Excellent technical and analytical aptitude
Good communication skills.
Excellent Project management skills.
Results-driven approach.
Mandatory Skills: Big Data, PySpark, Hive
Minimum of 8 years of experience, of which 4 years should be of applied data mining experience in disciplines such as Call Centre Metrics.
Strong experience in advanced statistics and analytics including segmentation, modelling, regression, forecasting etc.
Experience with leading and managing large teams.
Demonstrated pattern of success in using advanced quantitative analytic methods to solve business problems.
Demonstrated experience with Business Intelligence / Data Mining tools to work with data, investigate anomalies, construct data sets, and build models.
It is critical to share details of projects undertaken (preferably in the telecom industry), specifically analyses drawn from CRM data.
Role: Oracle DBA Developer
Location: Hyderabad
Required Experience: 8 + Years
Skills: DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL Server, Cassandra, Oracle SQL/PLSQL, MySQL/Oracle/MSSql/Mongo/Cassandra, security measure configuration
Roles and Responsibilities:
1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra
2. DBA experience in a SRE environment will be an advantage.
3. Experience in automation/building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning).
4. Analyze solutions and implement best practices for cloud databases and their components.
5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins etc.) that provide safe self-service capabilities to the team.
6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.
7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.
8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)
9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.
10. Recommend query and schema changes to optimize the performance of database queries.
11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as On-Premises.
12. Have experience with cloud databases such as SQL Server, Oracle, and Cassandra
13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)
14. Have excellent written and verbal English communication skills.
15. Planning, managing, and scaling of data stores to ensure a business’ complex data requirements are met and it can easily access its data in a fast, reliable, and safe manner.
16. Ensures the quality of orchestration and integration of tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructures.
17. Data Security and protecting the data through rigorous testing of backup and recovery processes and frequently auditing well-regulated security procedures.
18. Use software and tooling to automate manual tasks and enable engineers to move fast without the concern of losing data during their experiments.
19. Define service-level objectives (SLOs) and perform risk analysis to determine which problems to address and which problems to automate.
20. Bachelor's Degree in a technical discipline required.
21. DBA Certifications required: Oracle, SQLServer, Cassandra (2 or more)
22. Cloud and DevOps certifications will be an advantage.
Must have Skills:
• Oracle DBA with development
• SQL
• DevOps tools
• Cassandra
About Us:
6sense is a Predictive Intelligence Engine that is reimagining how B2B companies do sales and marketing. It works with big data at scale, advanced machine learning and predictive modelling to find buyers and predict what they will purchase, when and how much.
6sense helps B2B marketing and sales organizations fully understand the complex ABM buyer journey. By combining intent signals from every channel with the industry’s most advanced AI predictive capabilities, it is finally possible to predict account demand and optimize demand generation in an ABM world. Equipped with the power of AI and the 6sense Demand Platform™, marketing and sales professionals can uncover, prioritize, and engage buyers to drive more revenue.
6sense is seeking a Staff Software Engineer, Data, to become part of a team designing, developing, and deploying its customer-centric applications.
We’ve more than doubled our revenue in the past five years and completed our Series E funding of $200M last year, giving us a stable foundation for growth.
Responsibilities:
1. Own critical datasets and data pipelines for product & business, and work towards direct business goals of increased data coverage, data match rates, data quality, and data freshness
2. Create more value from various datasets with creative solutions, unlocking more value from existing data, and help build a data moat for the company
3. Design, develop, test, deploy and maintain optimal data pipelines, and assemble large, complex data sets that meet functional and non-functional business requirements
4. Improve our current data pipelines, i.e., improve their performance and SLAs, remove redundancies, and figure out a way to test before vs. after rollout
5. Identify, design, and implement process improvements in data flow across multiple stages and via collaboration with multiple cross-functional teams, e.g., automating manual processes, optimising data delivery, hand-off processes, etc.
6. Work with cross-functional stakeholders, including the Product, Data Analytics, and Customer Support teams, to enable their data access and related goals
7. Build for security, privacy, scalability, reliability and compliance
8. Mentor and coach other team members on scalable and extensible solution design and best coding standards
9. Help build a team and cultivate innovation by driving cross-collaboration and execution of projects across multiple teams
Requirements:
8-10+ years of overall work experience as a Data Engineer
Excellent analytical and problem-solving skills
Strong experience with Big Data technologies like Apache Spark; experience with Hadoop, Hive, and Presto would be a plus
Strong experience in writing complex, optimized SQL queries across large data sets, and experience with optimizing queries and underlying storage
Experience with Python/Scala
Experience with Apache Airflow or other orchestration tools (a minimal DAG sketch follows this list)
Experience with writing Hive/Presto UDFs in Java
Experience working on the AWS cloud platform and services
Experience with key-value stores or NoSQL databases would be a plus
Comfortable with the Unix/Linux command line
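A minimal Airflow sketch for the orchestration item above, assuming Airflow 2.4+; the DAG id, schedule, and task bodies are hypothetical.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data")

def transform():
    print("clean and aggregate")

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2   # extract runs before transform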
Interpersonal Skills:
You can work independently as well as part of a team.
You take ownership of projects and drive them to conclusion.
You’re a good communicator and are capable of not just doing the work, but also teaching others and explaining the “why” behind complicated technical decisions.
You aren’t afraid to roll up your sleeves: this role will evolve over time, and we’ll want you to evolve with it.
Lead Data Engineer
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.
Job responsibilities
· You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems
· You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges
· You will collaborate with Data Scientists in order to design scalable implementations of their models
· You will pair to write clean and iterative code based on TDD
· Leverage various continuous delivery practices to deploy, support and operate data pipelines
· Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
· Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
· Create data models and speak to the tradeoffs of different modeling approaches
· On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product
· Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
· Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
Job qualifications
Technical skills
· You are equally happy coding and leading a team to implement a solution
· You have a track record of innovation and expertise in Data Engineering
· You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations
· You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
· You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
· Hands-on experience in MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
· You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
· You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
· Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems
Professional skills
· Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers
· You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
· An interest in coaching others, sharing your experience and knowledge with teammates
· You enjoy influencing others and always advocate for technical excellence while being open to change when needed
● Work in a Business Process Outsourcing (BPO) Module providing marketing solutions to different local and international clients and Business Development Units.
● Content Production: Creating content to support demand generation initiatives, and grow brand awareness in a competitive category of an online casino company.
● Writing content for different marketing channels (such as website, blogs, thought leadership pieces, social media, podcasts, webinar, etc.) as assigned to effectively reach the desired target players and marketing goals.
● Data Analysis: Analyze player data to identify trends, improve the player experience, and make data-driven decisions for the casino's operations.
● Research and use AI-based tools to improve and speed up content creation processes.
● Researching content and consumer trends to ensure that content is relevant and appealing.
● Help develop and participate in market research for the purposes of thought leadership content production and opportunities, and competitive intelligence for content marketing.
● Security: Maintain a secure online environment, including protecting player data and preventing cyberattacks.
● Managing content calendars (and supporting calendar management) and ensuring the content you write is consistent with brand standards and meets the brief as assigned.
● Coordinating with project manager / content manager to ensure the timely delivery of assignments.
● Keeping up to date with content trends, consumer preferences, and advancements in technology.
● Reporting: Generate regular reports on key performance indicators, financial metrics, and operational data to assess the casino's performance.
The specific responsibilities and requirements for a Marketing Content Supervisor/Manager in an online casino may vary depending on the size and nature of the casino, as well as local regulations and industry standards.
Salary
PHP 80,000 - PHP 100,000
INR 117,587 - 146,960
Work Experience Requirements
Essential Qualifications
● Excellent research, writing, editing, proofreading, content creation and communication skills.
● Proficiency/experience in formulating corporate/brand/product messaging.
● Strong understanding of SEO and content practices.
● Proficiency in MS Office, Zoom, Slack, marketing platforms related to creative content creation/ project management/ workflow.
● Content writing / copywriting portfolio demonstrating scope of content/copy writing capabilities and application of writing and SEO best practices.
● Highly motivated, self-starter, able to prioritize projects, accept responsibility and follow through without close supervision on every step.
● Demonstrated strong analytical skills with an action-oriented mindset focused on data-driven results.
● Experience in AI-based content creation tools is a plus. Openness to research and use AI tools required.
● Passion for learning and self-improvement.
● Detail-oriented team player with a positive attitude.
● Ability to embrace change and love working in dynamic, growing environments.
● Experience with research, content production, writing on-brand and turning thought pieces into multiple content assets by simplifying complex concepts preferred.
● Ability to keep abreast of content trends and advancements in content strategies and technologies.
● On-camera or on-mic experience or desire to speak and present preferred.
● Must be willing to report onsite in Cambodia
Looking for freelance work?
We are seeking a freelance Data Engineer with 7+ years of experience.
Skills Required: Deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems
What we are looking for
We are seeking an experienced Senior Data Engineer with experience in architecture, design, and development of highly scalable data integration and data engineering processes
Experience:
Should have a minimum of 10-12 years of experience.
Should have Product Development / Maintenance / Production Support experience in a support organization
Should have a good understanding of the services business for Fortune 1000 companies from an operations point of view
Ability to read, understand and communicate complex technical information
Ability to express ideas in an organized, articulate and concise manner
Ability to face stressful situations with a positive attitude
Any certification in regard to support services will be an added advantage
Education: BE, B.Tech (CS), MCA
Location: India
Primary Skills:
Hands-on experience with the OpenStack framework. Ability to set up a private cloud using an OpenStack environment. Awareness of various OpenStack services and modules.
Strong experience with OpenStack services like Neutron, Cinder, Keystone, etc.
Proficiency in programming languages such as Python, Ruby, or Go.
Strong knowledge of Linux systems administration and networking.
Familiarity with virtualization technologies like KVM or VMware.
Experience with configuration management and IaC tools like Ansible, Terraform.
Subject matter expertise in OpenStack security
Solid experience with Linux and shell scripting
Sound knowledge of cloud computing concepts & technologies, such as Docker, Kubernetes, AWS, GCP, Azure, etc.
Ability to configure an OpenStack environment for optimum resource usage
Good knowledge of security and operations in an OpenStack environment
Strong knowledge of Linux internals, networking, storage, security
Strong knowledge of VMware Enterprise products (ESX, vCenter)
Hands on experience with HEAT orchestration
Experience with CI/CD, monitoring, operational aspects
Strong experience working with REST APIs and JSON
Exposure to Big data technologies ( Messaging queues, Hadoop/MPP, NoSQL databases)
Hands on experience with open source monitoring tools like Grafana/Prometheus/Nagios/Ganglia/Zabbix etc.
Strong verbal and written communication skills are mandatory
Excellent analytical and problem solving skills are mandatory
Role & Responsibilities
Advise customers and colleagues on cloud and virtualization topics
Work with the architecture team on cloud design projects using OpenStack
Collaborate with product, customer success, and presales on customer projects
Participate in onsite assessments and workshops when requested
Provide subject matter expertise and mentor colleagues
Set up OpenStack environments for projects
Design, deploy, and maintain OpenStack infrastructure.
Collaborate with cross-functional chapters to integrate OpenStack with other services (k8s, DBaaS)
Develop automation scripts and tools to streamline OpenStack operations (see the sketch after this list).
Troubleshoot and resolve issues related to OpenStack services.
Monitor and optimize the performance and scalability of OpenStack components.
Stay updated with the latest OpenStack releases and contribute to the OpenStack community.
Work closely with Architects and Product Management to understand requirements
Should be capable of working independently and be responsible for end-to-end implementation
Should work with complete ownership and handle all issues without missing SLAs
Work closely with engineering team and support team
Should be able to debug the issues and report appropriately in the ticketing system
Contribute to improving the efficiency of the assignment through quality improvements & innovative suggestions
Should be able to debug/create scripts for automation
Should be able to configure monitoring utilities & set up alerts
Should be hands on in setting up OS, applications, databases and have passion to learn new technologies
Should be able to scan logs, errors, exception and get to the root cause of the issue
Contribute to developing a knowledge base in collaboration with other team members
Maintain customer loyalty through integrity and accountability
Groom and mentor team members on project technologies and work
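As a hedged illustration of the automation responsibilities above, this sketch uses the openstacksdk Python package; the cloud profile name, metadata convention, and clean-up policy are assumptions.

import openstack

# Credentials come from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud="mycloud")

# Report compute capacity and health per hypervisor.
for hv in conn.compute.hypervisors():
    print(hv.name, hv.status)

# Clean up powered-off instances explicitly tagged for reaping (hypothetical policy).
for server in conn.compute.servers(status="SHUTOFF"):
    if "reap" in (server.metadata or {}):
        print("deleting", server.name)
        conn.compute.delete_server(server)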
We require a full-stack Senior SDE with a focus on backend microservices / modular monoliths, with 3-4+ years of experience in the following:
This is one of the early positions for scaling up the Technology team. So culture-fit is really important.
About Quadratyx:
We are a global product-centric insight & automation services company. We help the world’s organizations make better & faster decisions using the power of insight & intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences & analytics than most other companies in India.
We firmly believe in Excellence Everywhere.
Job Description
Purpose of the Job/ Role:
• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.
Key Requisites:
• Expertise in Data structures and algorithms.
• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.
• Scaling of cloud-based infrastructure.
• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.
• Leading and mentoring a team of data engineers.
• Hands-on experience in test-driven development (TDD).
• Expertise in NoSQL databases like Mongo, Cassandra, etc. (Mongo preferred), and strong knowledge of relational databases.
• Good knowledge of Kafka and Spark Streaming internal architecture.
• Good knowledge of any Application Servers.
• Extensive knowledge of big data platforms like Hadoop; Hortonworks etc.
• Knowledge of data ingestion and integration on cloud services such as AWS; Google Cloud; Azure etc.
Skills/ Competencies Required
Technical Skills
• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python, or Java.
• Clear end-to-end experience in designing, programming, and implementing large software systems.
• Passion and analytical abilities to solve complex problems.
Soft Skills
• Always speaking your mind freely.
• Communicating ideas clearly in talking and writing, integrity to never copy or plagiarize intellectual property of others.
• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.
Academic Qualifications & Experience Required
Required Educational Qualification & Relevant Experience
• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.
• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).
Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA
Responsibilities:
Requirements:
Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.
Qualifications & Experience:
▪ 2 - 4 years overall experience in ETLs, data pipeline, Data Warehouse development and database design
▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.
▪ Expert in SQL, worked on advanced SQL for at least 2+ years
▪ Good development skills in Java, Python or other languages
▪ Experience with EMR, S3
▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview
▪ Comfortable working in an agile environment
Role: Principal Software Engineer
We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role will require managing, processing, and analyzing large amounts of raw information in scalable databases. It will also involve developing unique data structures and writing algorithms for an entirely new set of products. The candidate is required to have critical thinking and problem-solving skills. Candidates must be experienced in software development with advanced algorithms and must be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should have some exposure to cloud environments, continuous integration, and agile scrum processes.
Responsibilities:
• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule
• Software Development that creates data driven intelligence in the products which deals with Big Data backends
• Exploratory analysis of the data to be able to come up with efficient data structures and algorithms for given requirements
• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development
• Managing data in large-scale data stores (such as NoSQL DBs, time-series DBs, geospatial DBs, etc.)
• Creating metrics and evaluating algorithms for better accuracy and recall
• Ensuring efficient access and usage of data through the means of indexing, clustering etc.
• Collaborate with engineering and product development teams.
Requirements:
• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or related field from top-tier school
• OR Master’s degree or higher in Statistics, Mathematics, with hands on background in software development.
• 8 to 10 years of experience with product development, having done algorithmic work
• 5+ years of experience working with large data sets or doing large-scale quantitative analysis
• Understanding of SaaS based products and services.
• Strong algorithmic problem-solving skills
• Able to mentor and manage a team and take responsibility for team deadlines.
Skill set required:
• In-depth knowledge of the Python programming language
• Understanding of software architecture and software design
• Must have fully managed a project with a team
• Having worked with Agile project management practices
• Experience with data processing, analytics, and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.); a small sketch follows this list
• Strong understanding of SQL and querying NoSQL databases (e.g., Mongo, Cassandra, Redis)
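A small, illustrative sketch of the pandas/matplotlib skills listed above; the CSV file and its schema are hypothetical.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("events.csv", parse_dates=["ts"])

# Daily active users: dedupe users per day, then count.
dau = (
    df.assign(day=df["ts"].dt.date)
      .groupby("day")["user_id"]
      .nunique()
)

dau.plot(kind="line", title="Daily active users")
plt.tight_layout()
plt.savefig("dau.png")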
Minimum 4 to 10 years of experience in testing distributed backend software architectures/systems.
• 4+ years of work experience in test planning and automation of enterprise software
• Expertise in programming using Java or Python and other scripting languages.
• Experience with one or more public clouds is expected.
• Comfortable with build processes, CI processes, and managing QA environments, as well as working with build management tools like Git and Jenkins
• Experience with performance and scalability testing tools.
• Good working knowledge of relational databases, logging, and monitoring frameworks is expected.
Familiarity with system flows and how components interact with an application, e.g., Elasticsearch, Mongo, Kafka, Hive, Redis, AWS
Required Skills:
• Minimum of 4-6 years of experience in data modeling (including conceptual, logical and physical data models).
• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, Datastage, etc.
• 4-6 years of experience as a database developer in Oracle, MS SQL or another enterprise database, with a focus on building data integration processes.
• Candidate should have exposure to any NoSQL technology, preferably MongoDB.
• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica or Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).
• Understanding of data warehousing concepts and decision support systems.
• Ability to deal with sensitive and confidential material and adhere to worldwide data security standards.
• Experience writing documentation for design and feature requirements.
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.
• Excellent communication and collaboration skills.
KEY RESPONSIBILITIES
EDUCATION & SKILLS REQUIREMENT
Job description
Primary / Mandatory skills:
We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to ensure that our systems are secure, reliable and high-performing by constantly striving to achieve best-in-class infrastructure and security by:
Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo and Postgres and are increasingly leveraging AWS-managed services (e.g. RDS, lambda).
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between E-commerce and cloud. The E-commerce of any industry is limiting and poses a huge challenge in terms of the finances spent on physical data structures.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.
What we are looking for:
● 3+ years’ experience developing Data & Analytic solutions
● Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive & Spark
● Experience with relational SQL
● Experience with scripting languages such as Shell, Python
● Experience with source control tools such as GitHub and related dev process
● Experience with workflow scheduling tools such as Airflow
● In-depth knowledge of scalable cloud
● Has a passion for data solutions
● Strong understanding of data structures and algorithms
● Strong understanding of solution and technical design
● Has a strong problem-solving and analytical mindset
● Experience working with Agile Teams.
● Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders
● Able to quickly pick up new programming languages, technologies, and frameworks
● Bachelor’s Degree in computer science
Why Explore a Career at Kloud9:
With job opportunities in prime locations across the US, London, Poland and Bengaluru, we help build your career path in the cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.
Requirements:
● Understanding our data sets and how to bring them together.
● Working with our engineering team to support custom solutions offered to the product development.
● Filling the gap between development, engineering and data ops.
● Creating, maintaining and documenting scripts to support ongoing custom solutions.
● Excellent organizational skills, including attention to precise details
● Strong multitasking skills and ability to work in a fast-paced environment
● 5+ years of experience with Python for developing scripts.
● Know your way around RESTful APIs (able to integrate; not necessarily to publish).
● You are familiar with pulling and pushing files from SFTP and AWS S3.
● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.
● Familiarity with SQL programming to query and transform data from relational Databases.
● Familiarity to work with Linux (and Linux work environment).
● Excellent written and verbal communication skills
● Extracting, transforming, and loading data into internal databases and Hadoop
● Optimizing our new and existing data pipelines for speed and reliability
● Deploying product build and product improvements
● Documenting and managing multiple repositories of code
● Experience with SQL and NoSQL databases (Cassandra, MySQL)
● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena)
● Hands-on experience in Airflow
● Understanding of best practices and common coding patterns around storing, partitioning, warehousing and indexing of data
● Experience in reading data from Kafka topics (both live stream and offline); see the sketch after this list
● Experience in PySpark and DataFrames
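A hedged sketch of reading a Kafka topic into PySpark DataFrames, both as a bounded (offline) batch and as a live stream; it assumes the spark-sql-kafka connector is on the classpath, and the broker and topic names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-read").getOrCreate()

# Offline: read the topic as a bounded batch over all current offsets.
batch = (
    spark.read.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "earliest")
    .load()
)
print(batch.count())

# Live stream: same source, consumed incrementally and echoed to the console.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)
stream.writeStream.format("console").start().awaitTermination()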
Responsibilities:
You’ll be:
● Collaborating across an agile team to continuously design, iterate, and develop big data systems.
● Extracting, transforming, and loading data into internal databases.
● Optimizing our new and existing data pipelines for speed and reliability.
● Deploying new products and product improvements.
● Documenting and managing multiple repositories of code.
Why LiftOff?
We at LiftOff specialize in product creation, for our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.
Many on the team are serial entrepreneurs with a history of successful exits.
As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.
About the Role
If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.
Please Note that this is for a Consultant Role
Candidates who are okay with freelancing/part-time can apply
We are looking for a Big Data Engineer with Java for the Chennai location
Location : Chennai
Exp : 11 to 15 Years
Job description
Required Skill:
1. Candidate should have a minimum of 7 years of total experience
2. Candidate should have a minimum of 4 years of experience in Big Data design and development
3. Candidate should have experience in Java, Spark, Hive & Hadoop, and Python
4. Candidate should have experience in any RDBMS.
Roles & Responsibility:
1. To create work plans, monitor and track the work schedule for on time delivery as per the defined quality standards.
2. To develop and guide the team members in enhancing their technical capabilities and increasing productivity.
3. To ensure process improvement and compliance in the assigned module, and participate in technical discussions or review.
4. To prepare and submit status reports for minimizing exposure and risks on the project, and for closure of escalations.
Regards,
Priyanka S
Data Engineer- Senior
Cubera is a data company revolutionizing big data analytics and AdTech through data-share-value principles, wherein users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards Web3.
What are you going to do?
Design & Develop high performance and scalable solutions that meet the needs of our customers.
Closely work with the Product Management, Architects and cross functional teams.
Build and deploy large-scale systems in Java/Python.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.
Follow best practices that can be adopted in the Big Data stack.
Use your engineering experience and technical skills to drive the features and mentor the engineers.
What are we looking for (Competencies):
Bachelor’s degree in computer science, computer engineering, or related technical discipline.
Overall 5 to 8 years of programming experience in Java, Python including object-oriented design.
Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.
Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.
Data Store: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.
Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.
Ability to work with distributed teams in a collaborative and productive manner.
Benefits:
Competitive Salary Packages and benefits.
Collaborative, lively and an upbeat work environment with young professionals.
Job Category: Development
Job Type: Full Time
Job Location: Bangalore
Roles and Responsibilities