
We are looking for a Smart Product Marketer Intern or Consultant who can own the marketing responsibilities for our software products.
Skills Required
1. Marketing skills
- Drafting marketing plans
- Growth hacking
- A/B testing
- Landing page creation (HTML)
- Content creation
- Understanding Google Analytics
- Google Ads
2. Content
- Emails
- Website content
- App Store
- Play Store
- SEO
Roles and Responsibilities
- Take the product to new heights to increase our revenue.
- Understand the competitors, market, user problems, customer requirements, and market trends.
- Collect and analyze data and create reports that help identify the right moves to capture the target market.
- Assist the assigned Manager and the team.
- Brainstorm ideas with the team to plan the right strategy for the product.
About Us:
Spacenos is a fast-growing start-up that has been innovating in the finance, edtech, and marketing domains since 2015 and has won multiple awards and recognitions from more than 40 MNCs and Fortune 500 companies. Our clients are based in the U.S.A. and Australia. We are funded and supported by the Government of Karnataka, angel investors, and international grants.
Hiring Process:
- Apply, and your CV and past work will be reviewed.
- Complete the Final Step form to receive a telephonic interview or assessment.
- Receive an offer letter if selected.
Hiring Duration:
Our hiring process takes less than 24 hours from the time you receive the Final Step form.
Validity: Up to Dec 2023
- Apply soon; earlier applicants will be preferred over later ones.

APPLY ONLY IF YOU HAVE EXPERIENCE IN THE COMMERCIAL REAL ESTATE BUSINESS
Responsible for identifying new business opportunities, building client relationships, and increasing company revenue. The role focuses on lead generation, client acquisition, and expanding the company’s market presence.
Key Responsibilities:
- Identify and develop new business opportunities through calls, meetings, and networking
- Generate leads and convert them into long-term clients
- Meet potential clients to present company services or products
- Maintain strong relationships with existing customers
- Achieve monthly and quarterly sales targets
- Prepare proposals, quotations, and business presentations
- Conduct market research and analyze competitor activities
- Coordinate with internal teams for successful service delivery
- Maintain sales reports, pipeline data, and CRM updates
NOTE
Only candidates based in Coimbatore will be considered.
Storytelling platforms have not evolved with smartphones; we still consume stories as passive observers. At Raidenbit, we are building KahaniBox (http://onelink.to/kahaniboxapp) - the Netflix for interactive fiction in Indian languages. We create stories with immersive roleplay experiences that let the audience control how the stories unfold and end. While doing that, we are solving some fascinating problems around decoding human psychology and understanding our audience's content tastes using engineering and data science.
We are looking for experienced Full Stack Developers - the ones who would help us in making KahaniBox the largest interactive fiction platform globally. You would be maintaining/fixing/improving our existing and new front-end and back-end solutions while working 100% remotely in flexible timings.
At least 2 years of experience is required in:
- HTML/CSS
- JavaScript (ES6)
- Vue.js (or an equivalent frontend framework, e.g. React/Angular)
- Node.js
- Google Cloud Platform (or an equivalent, e.g. AWS or Azure).
The ideal candidate will also have experience with:
- BaaS (Firebase or equivalent) and a good understanding of serverless architectures.
- RESTful API consumption and development.
- Testing (unit and integration).
- CI, continuous deployment, Git, and project management tools.
- Setting up and mining analytics databases.
Bonus points if you have experience with TypeScript.
Job Description:
Infrastructure as Code (IaC): Implement and maintain infrastructure as code using tools like Terraform or AWS CloudFormation, ensuring scalability and reliability of cloud resources.
Continuous Integration/Continuous Deployment (CI/CD): Establish and enhance CI/CD pipelines to automate build, test, and deployment processes, facilitating faster and more reliable releases.
Configuration Management: Manage and automate server configurations using tools like Ansible, Chef, or Puppet, ensuring consistency and compliance across environments.
Monitoring and Logging: Set up and maintain monitoring and logging solutions (e.g., CloudWatch, ELK stack) to proactively identify and troubleshoot issues within the AWS infrastructure.
Security and Compliance: Implement security best practices and compliance standards (e.g., AWS IAM, Security Groups) to protect data and resources. Conduct security audits and vulnerability assessments.
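The infrastructure-as-code responsibility above can be illustrated with a minimal, hedged sketch (the resource name and properties are hypothetical, not from this posting): a CloudFormation-style template assembled as plain Python data and serialized to JSON, showing the core idea of declaring infrastructure as versionable, reviewable text.

```python
import json

# Hypothetical template: a single versioned S3 bucket, purely to
# illustrate declaring infrastructure as data rather than clicks.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"}
            },
        }
    },
}

# Serialize for review or deployment via the CloudFormation service.
print(json.dumps(template, indent=2))
```

In practice the same declaration would live in a `.yaml`/`.json` file (or Terraform HCL) under version control, so every change to the infrastructure goes through the same review pipeline as application code.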
The Successful Applicant
AWS Proficiency: Extensive experience with Amazon Web Services (AWS) and its services, including EC2, S3, RDS, Lambda, and more. AWS Certified DevOps Engineer certification is a plus.
Infrastructure as Code (IaC): Proficiency in writing and maintaining infrastructure as code using tools like Terraform, AWS CloudFormation, or similar technologies.
CI/CD Tools: Strong knowledge of CI/CD tools such as Jenkins, Travis CI, or AWS CodePipeline. Ability to configure and optimize automated build and deployment pipelines.
Containerization: Hands-on experience with containerization technologies like Docker and container orchestration platforms like Kubernetes in AWS.
Scripting and Automation: Proficiency in scripting languages (e.g., Bash, Python) for automating tasks and managing infrastructure.
Monitoring and Logging Tools: Familiarity with monitoring and logging tools such as CloudWatch, ELK stack (Elasticsearch, Logstash, Kibana), Prometheus, Grafana, or Splunk.
Security and Compliance: Strong understanding of AWS security best practices, identity and access management (IAM), and compliance frameworks (e.g., HIPAA, GDPR).
Networking: Knowledge of AWS networking concepts, VPC, and routing. Experience with VPNs, Direct Connect, and other networking technologies is a plus.
Database Administration: Basic knowledge of database administration on AWS, including RDS, DynamoDB, or Aurora.
Collaboration and Communication: Excellent communication and teamwork skills to collaborate with cross-functional teams and convey technical information effectively.
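The scripting, automation, and monitoring expectations above can be sketched with a small, hedged example (log format and alert threshold are hypothetical): a Python routine that computes the error rate from application log lines and flags when it crosses a threshold, the kind of glue logic that typically feeds a CloudWatch or Grafana alert.

```python
from collections import Counter

def error_rate(log_lines):
    """Return the fraction of non-empty log lines at ERROR level,
    assuming the level is the first whitespace-separated token."""
    levels = Counter(line.split()[0] for line in log_lines if line.strip())
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

def should_alert(log_lines, threshold=0.2):
    """Flag when the error rate exceeds the (hypothetical) threshold."""
    return error_rate(log_lines) > threshold

logs = [
    "INFO  request served in 12ms",
    "ERROR upstream timeout",
    "INFO  request served in 9ms",
    "ERROR connection reset",
]
print(should_alert(logs))  # True: error rate 0.5 exceeds 0.2
```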
Our client is a rapidly expanding global technology partner looking for a highly skilled Senior (Python) Data Engineer to join its exceptional Technology and Development team. The role is based in Kolkata. If you are passionate about demonstrating your expertise and thrive on collaborating with a group of talented engineers, this role was made for you!
At the heart of technology innovation, our client specializes in delivering cutting-edge solutions to clients across a wide array of sectors. With a strategic focus on finance, banking, and corporate verticals, they have earned a stellar reputation for their commitment to excellence in every project they undertake.
We are searching for a senior engineer to strengthen their global projects team. They seek an experienced Senior Data Engineer with a strong background in building Extract, Transform, Load (ETL) processes and a deep understanding of AWS serverless cloud environments.
As a vital member of the data engineering team, you will play a critical role in designing, developing, and maintaining data pipelines that facilitate data ingestion, transformation, and storage for our organization.
Your expertise will contribute to the foundation of our data infrastructure, enabling data-driven decision-making and analytics.
Key Responsibilities:
- ETL Pipeline Development: Design, develop, and maintain ETL processes using Python, AWS Glue, or other serverless technologies to ingest data from various sources (databases, APIs, files), transform it into a usable format, and load it into data warehouses or data lakes.
- AWS Serverless Expertise: Leverage AWS services such as AWS Lambda, AWS Step Functions, AWS Glue, AWS S3, and AWS Redshift to build serverless data pipelines that are scalable, reliable, and cost-effective.
- Data Modeling: Collaborate with data scientists and analysts to understand data requirements and design appropriate data models, ensuring data is structured optimally for analytical purposes.
- Data Quality Assurance: Implement data validation and quality checks within ETL pipelines to ensure data accuracy, completeness, and consistency.
- Performance Optimization: Continuously optimize ETL processes for efficiency, performance, and scalability, monitoring and troubleshooting any bottlenecks or issues that may arise.
- Documentation: Maintain comprehensive documentation of ETL processes, data lineage, and system architecture to ensure knowledge sharing and compliance with best practices.
- Security and Compliance: Implement data security measures, encryption, and compliance standards (e.g., GDPR, HIPAA) as required for sensitive data handling.
- Monitoring and Logging: Set up monitoring, alerting, and logging systems to proactively identify and resolve data pipeline issues.
- Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions.
- Continuous Learning: Stay current with industry trends, emerging technologies, and best practices in data engineering and cloud computing and apply them to enhance existing processes.
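The ETL and data-quality responsibilities above can be illustrated with a minimal, hedged sketch (the feed, schema, and table are hypothetical; a real pipeline would pull from S3 or an API and load into Redshift or a data lake rather than in-memory SQLite): extract a CSV feed, apply a simple completeness check during transform, and load the clean rows.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; stands in for data landed from S3 or an API.
RAW = """id,amount,currency
1,100.50,USD
2,,USD
3,250.00,EUR
"""

def extract(raw):
    """Parse the CSV feed into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Validate and normalize: drop rows with missing amounts
    (a basic data-quality gate) and cast fields to proper types."""
    clean = []
    for r in rows:
        if not r["amount"]:
            continue  # quality gate: skip incomplete records
        clean.append({"id": int(r["id"]),
                      "amount": float(r["amount"]),
                      "currency": r["currency"]})
    return clean

def load(rows, conn):
    """Write clean rows into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS payments "
                 "(id INTEGER, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO payments VALUES (:id, :amount, :currency)",
                     rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone())
# (2, 350.5) -- the row with a missing amount was filtered out
```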
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Proven experience as a Data Engineer with a focus on ETL pipeline development.
- Strong proficiency in Python programming.
- In-depth knowledge of AWS serverless technologies and services.
- Familiarity with data warehousing concepts and tools (e.g., Redshift, Snowflake).
- Experience with version control systems (e.g., Git).
- Strong SQL skills for data extraction and transformation.
- Excellent problem-solving and troubleshooting abilities.
- Ability to work independently and collaboratively in a team environment.
- Effective communication skills for articulating technical concepts to non-technical stakeholders.
- Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified DevOps Engineer are a plus.
Preferred Experience:
- Knowledge of data orchestration and workflow management tools
- Familiarity with data visualization tools (e.g., Tableau, Power BI).
- Previous experience in industries with strict data compliance requirements (e.g., insurance, finance) is beneficial.
What You Can Expect:
- Innovation Abounds: Join a company that constantly pushes the boundaries of technology and encourages creative thinking. Your ideas and expertise will be valued and put to work in pioneering solutions.
- Collaborative Excellence: Be part of a team of engineers who are as passionate and skilled as you are. Together, you'll tackle challenging projects, learn from each other, and achieve remarkable results.
- Global Impact: Contribute to projects with a global reach and make a tangible difference. Your work will shape the future of technology in finance, banking, and corporate sectors.
They offer an exciting and professional environment with great career and growth opportunities. Their office is located in the heart of Salt Lake Sector V, offering a terrific workspace that's both accessible and inspiring. Their team members enjoy a professional work environment with regular team outings. Joining the team means becoming part of a vibrant and dynamic team where your skills will be valued, your creativity will be nurtured, and your contributions will make a difference. In this role, you can work alongside some of the brightest minds in the industry.
If you're ready to take your career to the next level and be part of a dynamic team that's driving innovation on a global scale, we want to hear from you.
Apply today for more information about this exciting opportunity.
Onsite Location: Kolkata, India (Salt Lake Sector V)
The desired candidate is required to build and maintain scalable web applications.
- Collaborate with Dev, QA and Data Science teams on environment maintenance, monitoring (ELK, Prometheus or equivalent), deployments and diagnostics
- Administer a hybrid datacenter, including AWS cloud assets such as EC2
- Administer, automate and troubleshoot container based solutions deployed on AWS ECS
- Be able to troubleshoot problems and provide feedback to engineering on issues
- Automate deployment (Ansible, Python), build (Git, Maven, Make, or equivalent), and integration (Jenkins, Nexus) processes
- Learn and administer technologies such as ELK, Hadoop etc.
- Be a self-starter with the enthusiasm to learn and pick up new technologies in a fast-paced environment.
Need to have
- Hands-on Experience in Cloud based DevOps
- Experience working in AWS (EC2, S3, CloudFront, ECR, ECS, etc.)
- Experience with any programming language.
- Experience using Ansible, Docker, Jenkins, Kubernetes
- Experience in Python.
- Should be very comfortable working in Linux/Unix environment.
- Exposure to Shell Scripting.
- Solid troubleshooting skills
Responsibilities:
• Communicating with upper management to develop strategic operations goals
• Developing strategic long-range plans to achieve strategic objectives
• Creating and managing the organization’s fiscal operating and capital budget and expenses
• Monitoring operational performance of both internal and external service providers
• Monitoring facility condition and environmental performance and recommending or approving funding levels and spending plans
• Providing a workplace setting that is conducive to productive work
• Monitoring occupant satisfaction
• Monitoring construction and renovation projects
• Monitoring performance metrics
• Receiving and responding to approvals and notifications.
Key Skills:
Technical proficiency, excellent communication skills, strategic planning, staff management

- Work with developers to build out CI/CD pipelines, enable self-service build tools and reusable deployment jobs. Find, explore, and advocate for new technologies for enterprise use.
- Automate the provisioning of environments
- Promote new DevOps tools to simplify the build process and entire Continuous Delivery.
- Manage a Continuous Integration and Deployment environment.
- Coordinate and scale the evolving build and cloud deployment systems across all product development teams.
- Work independently, with, and across teams. Establishing smooth-running environments is paramount to your success and happiness.
- Encourage innovation, implementation of cutting-edge technologies, inclusion, outside-of-the-box thinking, teamwork, self-organization, and diversity.
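The CI/CD responsibilities above can be sketched with a small, hedged example (stage names and commands are hypothetical; a real pipeline would invoke Maven, Docker, or test runners): a fail-fast runner that executes build stages in order and stops at the first failure, mirroring how a CI server like Jenkins sequences a pipeline.

```python
import subprocess
import sys

# Hypothetical pipeline stages; real stages would call the actual
# build and test tooling instead of trivial Python one-liners.
STAGES = [
    ("build", [sys.executable, "-c", "print('compiling')"]),
    ("test",  [sys.executable, "-c", "print('running tests')"]),
]

def run_pipeline(stages):
    """Run each stage in order, stopping at the first failure
    (fail-fast, as a CI server would)."""
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return f"{name} failed"
    return "pipeline passed"

print(run_pipeline(STAGES))  # pipeline passed
```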
Technical Skills
- Experience with AWS multi-region/multi-AZ deployed systems, auto-scaling of EC2 instances, CloudFormation, ELBs, VPCs, CloudWatch, SNS, SQS, S3, Route53, RDS, IAM roles, and security groups
- Experience in Data Visualization and Monitoring tools such as Grafana and Kibana
- Experienced in Build and CI/CD/CT technologies like GitHub, Chef, Artifactory, Hudson/Jenkins
- Experience with log collection, filter creation, and analysis, builds, and performance monitoring/tuning of infrastructure.
- Automate the provisioning of environments, whether pulling strings with Puppet, cooking up recipes with Chef, or through Ansible, and the deployment of those environments using containers like Docker or rkt (at minimum, experience with some configuration management tool kept under version control).
Qualifications:
- B.E./B.Tech/M.C.A. in Computer Science, Electronics and Communication Engineering, or Electronics and Electrical Engineering.
- Minimum 60% in Graduation and Post-Graduation.
- Good verbal and written communication skills






